Sample records for multiple design points

  1. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications

    PubMed Central

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
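
    The core of any ECPM datapath like the one above is a double-and-add loop: every key bit costs one point doubling and, when the bit is set, one point addition, which is why fusing the two group operations into a single PDPA unit pays off. The sketch below shows only that control flow on a toy short-Weierstrass curve over a small prime field in affine coordinates; the curve, field, and formulas are illustrative assumptions, not the paper's binary-field Jacobian-coordinate hardware.

      # Minimal double-and-add scalar multiplication on a toy curve y^2 = x^3 + a*x + b (mod p).
      # Illustrative only: prime-field affine formulas, not the paper's binary-field Jacobian PDPA unit.
      p, a, b = 97, 2, 3              # small prime field and curve parameters (assumed for the demo)
      O = None                        # point at infinity

      def point_add(P, Q):
          if P is None: return Q
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % p == 0:
              return O                                           # P + (-P) = O
          if P == Q:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
          x3 = (lam * lam - x1 - x2) % p
          y3 = (lam * (x1 - x3) - y1) % p
          return (x3, y3)

      def point_mul(k, P):
          """Left-to-right double-and-add: one doubling per bit, one addition per set bit."""
          R = O
          for bit in bin(k)[2:]:
              R = point_add(R, R)                   # point doubling every iteration
              if bit == '1':
                  R = point_add(R, P)               # point addition only when the key bit is 1
          return R

      G = (3, 6)                                    # a point on y^2 = x^3 + 2x + 3 over GF(97)
      print(point_mul(20, G))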

  2. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    PubMed

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.

  3. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method, which enhances the precision and generalizability of the RLG bias compensation model. PMID:26633401

  4. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

    The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.

  5. Rotor redesign for a highly loaded 1800 ft/sec tip speed fan, 2

    NASA Technical Reports Server (NTRS)

    Bolt, C. R.

    1980-01-01

    Tests were conducted on a 0.5 hub/tip ratio single-stage fan designed to produce a pressure ratio of 2.280 at an efficiency of 83.8 percent with a rotor tip speed of 548.6 m/sec (1800 ft/sec). The rotor was designed utilizing a quasi-three-dimensional design system and four-part, multiple-circular-arc airfoil sections. The rotor is the third in a series of single-stage fans that have included a precompression airfoil design and a multiple-circular-arc airfoil design. The stage achieved a peak efficiency of 82.8 percent after performance had deteriorated by 0.6 of a point. The design mass flow was achieved at the peak efficiency point, and the stage total pressure ratio was 2.20, which is lower than the design goal of 2.28. The surge margin of 13% from the peak efficiency point exceeded the design goal of 7%.

  6. NULL Convention Floating Point Multiplier

    PubMed Central

    Albert, Anitha Juliette; Ramachandran, Seshasayanan

    2015-01-01

    Floating-point multiplication is a critical part of high-dynamic-range and computationally intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single-precision floating-point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented, to suit high-precision applications. The novelty of the research is that it is the first NULL convention logic multiplier designed to perform floating-point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating-point multiplier, obtained from Xilinx simulation and Cadence, are compared with those of its equivalent synchronous implementation. PMID:25879069
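
    For reference, the sketch below walks through the textbook IEEE 754 single-precision multiplication steps that a datapath like the one above implements: XOR of the sign bits, biased-exponent addition, a 24 x 24-bit significand product, and a one-bit normalization shift. Like the paper's design it performs no rounding (the result is truncated); subnormals, infinities and NaNs are ignored, and this is a generic software illustration, not the authors' NCL gate-level circuit.

      import struct

      def f32_fields(x):
          """Unpack a Python float into IEEE 754 single-precision sign/exponent/fraction fields."""
          bits = struct.unpack('>I', struct.pack('>f', x))[0]
          return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

      def f32_multiply(x, y):
          """Truncating single-precision multiply for normal operands (no rounding, as in the paper)."""
          s1, e1, f1 = f32_fields(x)
          s2, e2, f2 = f32_fields(y)
          sign = s1 ^ s2                                  # sign: XOR of operand signs
          m1, m2 = (1 << 23) | f1, (1 << 23) | f2         # restore the hidden leading 1
          prod = m1 * m2                                  # 24 x 24 -> 48-bit significand product
          exp = e1 + e2 - 127                             # add biased exponents, remove one bias
          if prod >> 47:                                  # product in [2, 4): shift right, bump exponent
              prod >>= 1
              exp += 1
          frac = (prod >> 23) & 0x7FFFFF                  # keep 23 fraction bits, truncate the rest
          bits = (sign << 31) | (exp << 23) | frac
          return struct.unpack('>f', struct.pack('>I', bits))[0]

      print(f32_multiply(3.5, -2.25), 3.5 * -2.25)        # -7.875 either way for these exact inputs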

  7. NULL convention floating point multiplier.

    PubMed

    Albert, Anitha Juliette; Ramachandran, Seshasayanan

    2015-01-01

    Floating-point multiplication is a critical part of high-dynamic-range and computationally intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single-precision floating-point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented, to suit high-precision applications. The novelty of the research is that it is the first NULL convention logic multiplier designed to perform floating-point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating-point multiplier, obtained from Xilinx simulation and Cadence, are compared with those of its equivalent synchronous implementation.

  8. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays.

    PubMed

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-03-15

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target's point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment.

  9. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays

    PubMed Central

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-01-01

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target’s point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment. PMID:28294996

  10. Effect of differing PowerPoint slide design on multiple-choice test scores for assessment of knowledge and retention in a theriogenology course.

    PubMed

    Root Kustritz, Margaret V

    2014-01-01

    Third-year veterinary students in a required theriogenology diagnostics course were allowed to self-select attendance at a lecture in either the evening or the next morning. One group was presented with PowerPoint slides in a traditional format (T group), and the other group was presented with PowerPoint slides in the assertion-evidence format (A-E group), which uses a single sentence and a highly relevant graphic on each slide to ensure attention is drawn to the most important points in the presentation. Students took a multiple-choice pre-test, attended the lecture, and then completed a take-home assignment. All students then completed an online multiple-choice post-test and, one month later, a different online multiple-choice test to evaluate retention. Groups did not differ on pre-test, assignment, or post-test scores, and both groups showed significant gains from pre-test to post-test and from pre-test to retention test. However, the T group showed significant decline from post-test to retention test, while the A-E group did not. Short-term scores were most likely unaffected by slide design because of the required coursework immediately after the lecture, but retention of material was superior with the assertion-evidence slide design.

  11. Sparse matrix-vector multiplication on network-on-chip

    NASA Astrophysics Data System (ADS)

    Sun, C.-C.; Götze, J.; Jheng, H.-Y.; Ruan, S.-J.

    2010-12-01

    In this paper, we present an idea for performing matrix-vector multiplication by using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections. Therefore, regular local data transfer is the major concept of many parallel implementations. However, when dealing with the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of the data transfers, i.e. with the irregular structure of the sparse matrices. So far, we have already implemented the proposed SMVM-NoC architecture with sizes 4×4 and 5×5 in IEEE 754 single-precision floating point using FPGA.
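
    For context on the irregularity mentioned above, the sketch below shows sparse matrix-vector multiplication in compressed sparse row (CSR) form: the work and the vector elements fetched for each output row depend entirely on the sparsity pattern, which is why fixed point-to-point interconnects map onto it poorly. This is a plain software illustration, not the paper's SMVM-NoC hardware mapping.

      # Sparse matrix-vector multiplication y = A @ x with A stored in CSR form.
      # The inner loop length varies per row with the sparsity pattern, which is what
      # makes the communication pattern irregular when the rows are spread over processing elements.
      def smvm_csr(values, col_idx, row_ptr, x):
          y = [0.0] * (len(row_ptr) - 1)
          for i in range(len(y)):
              for k in range(row_ptr[i], row_ptr[i + 1]):
                  y[i] += values[k] * x[col_idx[k]]
          return y

      # A = [[10, 0, 0, 2],
      #      [ 0, 3, 0, 0],
      #      [ 0, 0, 0, 0],
      #      [ 4, 0, 5, 0]]
      values  = [10.0, 2.0, 3.0, 4.0, 5.0]
      col_idx = [0, 3, 1, 0, 2]
      row_ptr = [0, 2, 3, 3, 5]
      print(smvm_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0, 1.0]))   # [12.0, 3.0, 0.0, 9.0]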

  12. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
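
    To make the point count concrete: a Scheffé quadratic model in q components has q + q(q-1)/2 parameters, and the usual minimally supported design takes exactly that many points, the q vertices plus the q(q-1)/2 binary 50:50 blends on the edges, which is why such designs sit entirely on the boundary and leave no degrees of freedom for Lack of Fit. The snippet below merely enumerates those boundary support points for q = 3 as an illustration; it does not reproduce the interior-point augmentation proposed in the paper.

      from itertools import combinations

      def minimal_quadratic_support(q):
          """Support points of a minimally supported design for the Scheffe quadratic mixture model:
          the q vertices and the q(q-1)/2 midpoints of the edges of the simplex."""
          vertices = [tuple(1.0 if j == i else 0.0 for j in range(q)) for i in range(q)]
          edge_mids = [tuple(0.5 if j in (i, k) else 0.0 for j in range(q))
                       for i, k in combinations(range(q), 2)]
          return vertices + edge_mids

      q = 3
      pts = minimal_quadratic_support(q)
      n_params = q + q * (q - 1) // 2          # linear terms x_i plus cross terms x_i * x_j
      print(n_params, len(pts))                # 6 parameters, 6 design points -> 0 df for Lack of Fit
      for p in pts:
          print(p)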

  13. [Development of the automatic dental X-ray film processor].

    PubMed

    Bai, J; Chen, H

    1999-07-01

    This paper introduces a multiple-point detecting technique of the density of dental X-ray films. With the infrared-ray multiple-point detecting technique, a single-chip microcomputer control system is used to analyze the effectiveness of the film developing in real time in order to achieve a good image. Based on the new technology, we designed the intelligent automatic dental X-ray film processor.

  14. Nonlinear aerodynamic wing design

    NASA Technical Reports Server (NTRS)

    Bonner, Ellwood

    1985-01-01

    The applicability of new nonlinear theoretical techniques is demonstrated for supersonic wing design. The new technology was utilized to define outboard panels for an existing advanced tactical fighter model. Mach 1.6 maneuver point design and multi-operating point compromise surfaces were developed and tested. High aerodynamic efficiency was achieved at the design conditions. A corollary result was that only modest supersonic penalties were incurred to meet multiple aerodynamic requirements. The nonlinear potential analysis of a practical configuration arrangement correlated well with experimental data.

  15. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  16. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    The spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, it is very difficult to solve the constrained spacecraft reorientation planning problem. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. The uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach will be used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decomposition is needed and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied in two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the onboard constrained attitude reorientation planning problem.

  17. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574

  18. The Effects of Point-of-View Video Modeling on Symbolic Play Actions and Play-Associated Language Utterances in Preschoolers with Autism

    ERIC Educational Resources Information Center

    Bonnet, Lauren Kravetz

    2012-01-01

    This single-subject research study was designed to examine the effects of point-of-view video modeling (POVM) on the symbolic play actions and play-associated language of four preschool students with autism. A multiple baseline design across participants was conducted in order to evaluate the effectiveness of using POVM as an intervention for…

  19. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process will be greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9 where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.

  20. Design and Integration of an All-Magnetic Attitude Control System for FASTSAT-HSV01's Multiple Pointing Objectives

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon; Sanders, Devon; Vanzwieten, Tannen; Capo-Lugo, Pedro

    2011-01-01

    The FASTSAT-HSV01 spacecraft is a microsatellite with magnetic torque rods as its sole attitude control actuator. FASTSAT's multiple payloads and mission functions require the Attitude Control System (ACS) to maintain Local Vertical Local Horizontal (LVLH)-referenced attitudes without spin-stabilization, while the pointing errors for some attitudes must be significantly smaller than the previous best demonstrated for this type of control system. The mission requires the ACS to hold multiple stable, unstable, and non-equilibrium attitudes, as well as eject a 3U CubeSat from an onboard P-POD and recover from the ensuing tumble. This paper describes the Attitude Control System, the reasons for design choices, how the ACS integrates with the rest of the spacecraft, and gives recommendations for potential future applications of the work.

  1. Multiple-Panel Cylindrical Solar Concentrator

    NASA Technical Reports Server (NTRS)

    Brown, E. M.

    1983-01-01

    Trough composed of many panels concentrates Sun's energy on solar cells, even when trough is not pointed directly at Sun. Tolerates deviation as great as 5 degrees from direction of sun. For terrestrial applications, multiple-flat-plate design offers potential cost reduction and ease of fabrication.

  2. A Survivable Wavelength Division Multiplexing Passive Optical Network with Both Point-to-Point Service and Broadcast Service Delivery

    NASA Astrophysics Data System (ADS)

    Ma, Xuejiao; Gan, Chaoqin; Deng, Shiqi; Huang, Yan

    2011-11-01

    A survivable wavelength division multiplexing passive optical network enabling both point-to-point service and broadcast service is presented and demonstrated. This architecture provides an automatic traffic recovery against feeder and distribution fiber link failure, respectively. In addition, it also simplifies the protection design for multiple services transmission in wavelength division multiplexing passive optical networks.

  3. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates the comparison of two CADe systems involving multiple comparisons. To control the Type I error, multiple-comparison correction is needed for keeping the family-wise error rate (FWER) less than a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods, both in terms of the FWER and power.
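
    For reference, the snippet below computes adjusted p-values for the two standard procedures named above, Bonferroni and a Hochberg-type step-up, over a set of per-operating-point comparisons; the paper's adjusted step-up method additionally uses the estimated correlations between sensitivities, which is not reproduced here, and the p-values are made-up numbers for illustration.

      def bonferroni(pvals):
          """Bonferroni-adjusted p-values: multiply by the number of comparisons, cap at 1."""
          m = len(pvals)
          return [min(1.0, m * p) for p in pvals]

      def hochberg_step_up(pvals):
          """Hochberg step-up adjusted p-values: running minimum of (m - rank) * p over descending ranks."""
          m = len(pvals)
          order = sorted(range(m), key=lambda i: pvals[i])       # indices sorted by ascending p-value
          adj = [0.0] * m
          running_min = 1.0
          for rank in range(m - 1, -1, -1):                      # sweep from the largest p-value down
              i = order[rank]
              running_min = min(running_min, (m - rank) * pvals[i])
              adj[i] = min(1.0, running_min)
          return adj

      # Hypothetical per-operating-point p-values from comparing two CADe systems' sensitivities.
      p = [0.012, 0.030, 0.041, 0.20]
      print(bonferroni(p))        # conservative: every p-value scaled by m = 4
      print(hochberg_step_up(p))  # less conservative, still controls the FWER under the usual conditions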

  4. Multiple point least squares equalization in a room

    NASA Technical Reports Server (NTRS)

    Elliott, S. J.; Nelson, P. A.

    1988-01-01

    Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in a room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than can the single-point equalizer above. Computer simulation results are presented of equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, will also be discussed.
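
    The multiple-point criterion described above has a direct batch least-squares form: stack the convolution matrices of the measured responses at the M positions and solve for the single FIR equalizer that best matches the same delayed pulse at all of them. The sketch below does exactly that on two made-up impulse responses; the adaptive (e.g. LMS-style) algorithms mentioned at the end of the abstract are not shown.

      import numpy as np

      def convolution_matrix(h, n_filt):
          """Toeplitz matrix C such that C @ w equals the full convolution h * w for a length-n_filt filter w."""
          n_out = len(h) + n_filt - 1
          C = np.zeros((n_out, n_filt))
          for j in range(n_filt):
              C[j:j + len(h), j] = h
          return C

      def multipoint_equalizer(impulse_responses, n_filt, delay):
          """FIR equalizer minimizing the summed squared error at all points against a delayed pulse.
          All impulse responses are assumed to have the same length in this sketch."""
          C = np.vstack([convolution_matrix(h, n_filt) for h in impulse_responses])
          d = np.zeros(C.shape[0])
          n_out = C.shape[0] // len(impulse_responses)
          for m in range(len(impulse_responses)):
              d[m * n_out + delay] = 1.0            # same delayed target at every equalization point
          w, *_ = np.linalg.lstsq(C, d, rcond=None)
          return w

      # Two short, made-up loudspeaker-to-ear impulse responses standing in for measured room responses.
      h1 = np.array([1.0, 0.6, 0.2])
      h2 = np.array([0.9, 0.4, 0.3])
      w = multipoint_equalizer([h1, h2], n_filt=16, delay=4)
      print(np.round(np.convolve(h1, w), 2))        # approximately a unit pulse delayed by 4 samples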

  5. Early Design Choices: Capture, Model, Integrate, Analyze, Simulate

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2004-01-01

    I. Designs are constructed incrementally to meet requirements and solve problems: a) Requirements types: objectives, scenarios, constraints, ilities. etc. b) Problem/issue types: risk/safety, cost/difficulty, interaction, conflict, etc. II. Capture requirements, problems and solutions: a) Collect design and analysis products and make them accessible for integration and analysis; b) Link changes in design requirements, problems and solutions; and c) Harvest design data for design models and choice structures. III. System designs are constructed by multiple groups designing interacting subsystems a) Diverse problems, choice criteria, analysis methods and point solutions. IV. Support integration and global analysis of repercussions: a) System implications of point solutions; b) Broad analysis of interactions beyond totals of mass, cost, etc.

  6. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.
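
    A common way to formalize the two operating conditions above is a weighted composite objective whose gradient is assembled from one adjoint solution per design point; the form below is the generic weighted-sum construction with thrust constraints, sketched from the abstract rather than taken from the paper, and the weights w_h and w_a are illustrative symbols.

      % Weighted multi-point objective: hover figure of merit FM and airplane-mode
      % propulsive efficiency eta_p, with a minimum-thrust constraint at each design point.
      \max_{\mathbf{x}}\; J(\mathbf{x}) = w_h\,\mathrm{FM}(\mathbf{x}) + w_a\,\eta_p(\mathbf{x})
      \quad\text{s.t.}\quad
      T_{\mathrm{hover}}(\mathbf{x}) \ge T_{\mathrm{hover}}^{\min},\;\;
      T_{\mathrm{cruise}}(\mathbf{x}) \ge T_{\mathrm{cruise}}^{\min},
      \qquad
      \frac{\mathrm{d}J}{\mathrm{d}\mathbf{x}}
        = w_h\,\frac{\mathrm{d}\,\mathrm{FM}}{\mathrm{d}\mathbf{x}}
        + w_a\,\frac{\mathrm{d}\eta_p}{\mathrm{d}\mathbf{x}}

    Each total derivative comes from an adjoint solve at its own design point, so the cost of the composite gradient grows with the number of design points rather than with the number of design variables (up to 123 in the study).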

  7. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.

  8. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  9. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    PubMed

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  10. Fast underdetermined BSS architecture design methodology for real time applications.

    PubMed

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The proposed architecture implementation and experimental comparison results show that the DHT design is two times faster than the state-of-the-art architecture.

  11. Orbit Transfer Vehicle (OTV) advanced expander cycle engine point design study, volume 2

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The engine requirements are emphasized and include: high specific impulse within a restricted installed length constraint, long life, multiple starts, different thrust levels, and man-rated reliability. The engine operating characteristics and the major component analytical design are summarized.

  12. Wireless Computing Architecture III

    DTIC Science & Technology

    2013-09-01

    Acronyms defined in the report include MIMO (Multiple-Input and Multiple-Output), MIMO/CON (MIMO with concurrent channel access and estimation), MU-MIMO (Multiuser MIMO), and OFDM (Orthogonal Frequency-Division Multiplexing). The work covers compressive sensing; a design for concurrent channel estimation in scalable multiuser MIMO networking; and novel networking protocols based on machine learning. Keywords: Network, Antenna Arrays, UAV networking, Angle of Arrival, Localization, MIMO, Access Point, Channel State Information, Compressive Sensing.

  13. THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES

    PubMed Central

    Song, Chi; Min, Xiaoyi; Zhang, Heping

    2016-01-01

    The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed using scan statistics that are computationally intensive and designed toward either common or rare change-points detection. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
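
    The screening step of SaRa rests on a simple local diagnostic, the difference between the means of the h measurements to the right and to the left of each position; local maxima of its absolute value are then ranked as candidate change-points. The single-sample sketch below implements that statistic under those assumptions; the adaptive multi-sample combination proposed in the paper is not reproduced here.

      import numpy as np

      def sara_statistic(y, h):
          """Local diagnostic D(i): mean of the h points right of i minus mean of the h points left of i."""
          y = np.asarray(y, dtype=float)
          D = np.full(len(y), np.nan)
          for i in range(h, len(y) - h):
              D[i] = y[i:i + h].mean() - y[i - h:i].mean()
          return D

      def screen_and_rank(y, h, top_k):
          """Keep local maxima of |D| within an h-neighbourhood, then rank them by |D|."""
          D = np.abs(sara_statistic(y, h))
          cand = [i for i in range(h, len(y) - h)
                  if D[i] == np.nanmax(D[max(i - h, 0):i + h + 1])]
          return sorted(cand, key=lambda i: -D[i])[:top_k]

      # Toy intensity sequence with mean shifts at positions 100 and 150 (a CNV-like segment).
      rng = np.random.default_rng(0)
      y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 50), rng.normal(0, 1, 100)])
      print(screen_and_rank(y, h=10, top_k=2))      # indices near the true change-points 100 and 150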

  14. Pre-Veterinary Medical Grade Point Averages as Predictors of Academic Success in Veterinary College.

    ERIC Educational Resources Information Center

    Julius, Marcia F.; Kaiser, Herbert E.

    1978-01-01

    A five-year longitudinal study was designed to find the best predictors of academic success in veterinary school at Kansas State University and to set up a multiple regression formula to be used in selecting students. The preveterinary grade point average was found to be the best predictor. (JMD)

  15. Wind farm electrical system

    DOEpatents

    Erdman, William L.; Lettenmaier, Terry M.

    2006-07-04

    An approach to wind farm design using variable speed wind turbines with low pulse number electrical output. The outputs of multiple wind turbines are aggregated to create a high pulse number electrical output at a point of common coupling with a utility grid network. Power quality at each individual wind turbine falls short of utility standards, but the aggregated output at the point of common coupling is within acceptable tolerances for utility power quality. The approach for aggregating low pulse number electrical output from multiple wind turbines relies upon a pad-mounted transformer at each wind turbine that performs phase multiplication on the output of each wind turbine. Phase multiplication converts a modified square wave from the wind turbine into a 6 pulse output. Phase shifting of the 6 pulse output from each wind turbine allows the aggregated output of multiple wind turbines to be a 24 pulse approximation of a sine wave. Additional filtering and VAR control is embedded within the wind farm to take advantage of the wind farm's electrical impedance characteristics to further enhance power quality at the point of common coupling.
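
    The benefit of phase multiplication can be seen from the characteristic-harmonic rule for a p-pulse converter, whose line current contains harmonics only at orders h = k*p +/- 1; aggregating four 6-pulse turbine outputs displaced by successive 15-degree transformer phase shifts therefore cancels everything below the 23rd harmonic at the point of common coupling. The snippet below just lists those orders; the 15-degree arrangement is the textbook one and is shown as an illustration, not as the patent's specific winding design.

      def characteristic_harmonics(pulse_number, k_max):
          """Line-side characteristic harmonic orders h = k * p +/- 1 for a p-pulse converter."""
          orders = set()
          for k in range(1, k_max + 1):
              orders.update({k * pulse_number - 1, k * pulse_number + 1})
          return sorted(orders)

      # One 6-pulse wind turbine converter versus the aggregated 24-pulse point of common coupling.
      print(characteristic_harmonics(6, 4))    # [5, 7, 11, 13, 17, 19, 23, 25]
      print(characteristic_harmonics(24, 2))   # [23, 25, 47, 49]
      # Four 6-pulse groups displaced by successive 15-degree transformer phase shifts (illustrative).
      print([i * 15 for i in range(4)])        # [0, 15, 30, 45]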

  16. Demonstration of a Balloon Borne Arc-second Pointer Design

    NASA Astrophysics Data System (ADS)

    Deweese, K.; Ward, P.

    Many designs for utilizing stratospheric balloons as low-cost platforms on which to conduct space science experiments have been proposed throughout the years. A major hurdle in extending the range of experiments for which these vehicles are useful has been the imposition of the gondola dynamics on the accuracy with which an instrument can be kept pointed at a celestial target. A significant number of scientists have sought the ability to point their instruments with jitter in the arc-second range. This paper presents the design and analysis of a stratospheric balloon-borne pointing system that is able to meet this requirement. The test results of a demonstration prototype of the design with similar ability are also presented. Discussion of a high-fidelity controller simulation for design analysis is presented. The flexibility of the flight train is represented through generalized modal analysis. A multiple-controller scheme is utilized for coarse and fine pointing. Coarse azimuth pointing is accomplished by an established pointing system, with extensive flight history, residing above the gondola structure. A pitch-yaw gimbal mount is used for fine pointing, providing orthogonal axes when nominally on target. Fine-pointing actuation is from direct-drive dc motors, eliminating backlash problems. An analysis of friction nonlinearities and a demonstration of the necessity of eliminating static friction are provided. A unique bearing hub design is introduced that eliminates static friction from the system dynamics. A control scheme involving linear …

  17. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
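
    The gradient stage parallelized above is naturally SPMD because each gradient component can be evaluated independently: every worker runs the same trajectory code on its own perturbed design variables. The sketch below mimics that decomposition with a finite-difference gradient split across worker processes; it is a generic illustration with a stand-in objective, not the POST3D/Origin2000 implementation.

      import numpy as np
      from multiprocessing import Pool

      def objective(x):
          # Stand-in for one simulated-trajectory evaluation (the expensive call in POST3D).
          return float(np.sum((x - 1.0) ** 2))

      def fd_component(args):
          """One worker's share of the gradient: a single forward-difference component."""
          x, i, h = args
          xp = x.copy()
          xp[i] += h
          return (objective(xp) - objective(x)) / h

      def parallel_gradient(x, h=1e-6, workers=4):
          with Pool(workers) as pool:               # SPMD-style: same code, different component index
              return np.array(pool.map(fd_component, [(x, i, h) for i in range(len(x))]))

      if __name__ == "__main__":
          x0 = np.zeros(8)
          print(np.round(parallel_gradient(x0), 3))  # approximately -2 for every component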

  18. Brain MRI volumetry in a single patient with mild traumatic brain injury.

    PubMed

    Ross, David E; Castelvecchi, Cody; Ochs, Alfred L

    2013-01-01

    This letter to the editor describes the case of a 42-year-old man with mild traumatic brain injury and multiple neuropsychiatric symptoms which persisted for a few years after the injury. Initial CT scans and MRI scans of the brain showed no signs of atrophy. Brain volume was measured using NeuroQuant®, an FDA-approved, commercially available software method. Volumetric cross-sectional (one point in time) analysis also showed no atrophy. However, volumetric longitudinal (two points in time) analysis showed progressive atrophy in several brain regions. This case illustrated in a single patient the principle discovered in multiple previous group studies, namely that the longitudinal design is more powerful than the cross-sectional design for finding atrophy in patients with traumatic brain injury.

  19. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.

  20. Experimental assessment and analysis of super-resolution in fluorescence microscopy based on multiple-point spread function fitting of spectrally demultiplexed images

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun

    2018-06-01

    This paper presents an experimental assessment and analysis of super-resolution microscopy based on multiple-point spread function fitting of spectrally demultiplexed images using a designed DNA structure as a test target. For the purpose, a DNA structure was designed to have binding sites at a certain interval that is smaller than the diffraction limit. The structure was labeled with several types of quantum dots (QDs) to acquire their spatial information as spectrally encoded images. The obtained images are analyzed with a point spread function multifitting algorithm to determine the QD locations that indicate the binding site positions. The experimental results show that the labeled locations can be observed beyond the diffraction-limited resolution using three-colored fluorescence images that were obtained with a confocal fluorescence microscope. Numerical simulations show that labeling with eight types of QDs enables the positions aligned at 27.2-nm pitches on the DNA structure to be resolved with high accuracy.

  1. Satellite switched FDMA advanced communication technology satellite program

    NASA Technical Reports Server (NTRS)

    Atwood, S.; Higton, G. H.; Wood, K.; Kline, A.; Furiga, A.; Rausch, M.; Jan, Y.

    1982-01-01

    The satellite switched frequency division multiple access system provided a detailed system architecture that supports a point-to-point communication system for long-haul voice, video and data traffic between small Earth terminals at Ka band frequencies at 30/20 GHz. A detailed system design is presented for the space segment, small terminal/trunking segment, and network control segment for domestic traffic model A or B, each totaling 3.8 Gb/s of small terminal traffic and 6.2 Gb/s trunk traffic. The small terminal traffic (3.8 Gb/s), which is a composite of thousands of Earth stations with digital traffic ranging from a single 32 Kb/s CVSD voice channel to thousands of channels containing voice, video and data with a data rate as high as 33 Mb/s, is emphasized for the satellite router portion of the system design. The system design concept presented effectively optimizes a unique frequency and channelization plan for both traffic models A and B with minimum reorganization of the satellite payload transponder subsystem hardware design. The unique zoning concept allows multiple beam antennas while maximizing multiple carrier frequency reuse. Detailed hardware design estimates for an FDMA router (part of the satellite transponder subsystem) indicate a weight and dc power budget of 353 lbs, 195 watts for traffic model A and 498 lbs, 244 watts for traffic model B.

  2. Photonic crystals possessing multiple Weyl points and the experimental observation of robust surface states

    PubMed Central

    Chen, Wen-Jie; Xiao, Meng; Chan, C. T.

    2016-01-01

    Weyl points, as monopoles of Berry curvature in momentum space, have captured much attention recently in various branches of physics. Realizing topological materials that exhibit such nodal points is challenging and indeed, Weyl points have been found experimentally in transition metal arsenide and phosphide and gyroid photonic crystal whose structure is complex. If realizing even the simplest type of single Weyl nodes with a topological charge of 1 is difficult, then making a real crystal carrying higher topological charges may seem more challenging. Here we design, and fabricate using planar fabrication technology, a photonic crystal possessing single Weyl points (including type-II nodes) and multiple Weyl points with topological charges of 2 and 3. We characterize this photonic crystal and find nontrivial 2D bulk band gaps for a fixed kz and the associated surface modes. The robustness of these surface states against kz-preserving scattering is experimentally observed for the first time. PMID:27703140

  3. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.

  4. Injection System for Multi-Well Injection Using a Single Pump

    PubMed Central

    Wovkulich, Karen; Stute, Martin; Protus, Thomas J.; Mailloux, Brian J.; Chillrud, Steven N.

    2015-01-01

    Many hydrological and geochemical studies rely on data resulting from injection of tracers and chemicals into groundwater wells. The even distribution of liquids to multiple injection points can be challenging or expensive, especially when using multiple pumps. An injection system was designed using one chemical metering pump to evenly distribute the desired influent simultaneously to 15 individual injection points through an injection manifold. The system was constructed with only one metal part contacting the fluid due to the low pH of the injection solutions. The injection manifold system was used during a three-month pilot scale injection experiment at the Vineland Chemical Company Superfund site. During the two injection phases of the experiment (Phase I = 0.27 L/min total flow, Phase II = 0.56 L/min total flow), flow measurements were made 20 times over three months; an even distribution of flow to each injection well was maintained (RSD <4%). This durable system is expandable to at least 16 injection points and should be adaptable to other injection experiments that require distribution of air-stable liquids to multiple injection points with a single pump. PMID:26140014

  5. Timelines Revisited: A Design Space and Considerations for Expressive Storytelling.

    PubMed

    Brehmer, Matthew; Lee, Bongshin; Bach, Benjamin; Riche, Nathalie Henry; Munzner, Tamara

    2017-09-01

    There are many ways to visualize event sequences as timelines. In a storytelling context where the intent is to convey multiple narrative points, a richer set of timeline designs may be more appropriate than the narrow range that has been used for exploratory data analysis by the research community. Informed by a survey of 263 timelines, we present a design space for storytelling with timelines that balances expressiveness and effectiveness, identifying 14 design choices characterized by three dimensions: representation, scale, and layout. Twenty combinations of these choices are viable timeline designs that can be matched to different narrative points, while smooth animated transitions between narrative points allow for the presentation of a cohesive story, an important aspect of both interactive storytelling and data videos. We further validate this design space by realizing the full set of viable timeline designs and transitions in a proof-of-concept sandbox implementation that we used to produce seven example timeline stories. Ultimately, this work is intended to inform and inspire the design of future tools for storytelling with timelines.

  6. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
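
    As a minimal illustration of the surrogate idea above, not of the paper's Bayesian design criterion for picking new sampling points, the sketch below fits a Gaussian process (via scikit-learn) to a handful of runs of a stand-in limit-state function g and then estimates the failure probability P(g(x) < 0) by Monte Carlo on the cheap surrogate.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def limit_state(x):
          # Stand-in for an expensive computer model; failure is g(x) < 0.
          return 3.0 - x[:, 0] ** 2 - x[:, 1]

      rng = np.random.default_rng(1)
      X_train = rng.uniform(-3, 3, size=(25, 2))            # a small design of "expensive" runs
      y_train = limit_state(X_train)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
      gp.fit(X_train, y_train)

      X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))        # cheap Monte Carlo on the surrogate
      p_fail_surrogate = np.mean(gp.predict(X_mc) < 0.0)
      p_fail_true = np.mean(limit_state(X_mc) < 0.0)        # available here only because g is cheap
      print(p_fail_surrogate, p_fail_true)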

  7. Parallel multipoint recording of aligned and cultured neurons on corresponding Micro Channel Array toward on-chip cell analysis.

    PubMed

    Tonomura, W; Moriguchi, H; Jimbo, Y; Konishi, S

    2008-01-01

    This paper describes an advanced Micro Channel Array (MCA) designed to record neuronal-network activity at multiple points simultaneously. The developed MCA is intended for the neuronal-network analysis that the co-authors have studied using an MEA (Micro Electrode Array) system. The MCA employs the principle of extracellular recording. The presented MCA has the following advantages. First of all, the electrodes integrated around individual micro channels are electrically isolated for parallel multipoint recording. Sucking and clamping of cells through micro channels is expected to improve the cellular selectivity and S/N ratio. In this study, hippocampal neurons were cultured on the developed MCA. As a result, the spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. Herein, we describe these successful experimental results together with the design and fabrication of the advanced MCA toward on-chip analysis of neuronal networks.

  8. Rapid Design of Gravity Assist Trajectories

    NASA Technical Reports Server (NTRS)

    Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.

    1991-01-01

    Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyse such trajectories are discussed. The so-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions, is demonstrated. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.

  9. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques, which mostly handle only the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
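
    For orientation, the standard statement of the quadratic transform for the sum-of-ratios case is sketched below, under the usual assumptions that each numerator A_m(x) is nonnegative and concave and each denominator B_m(x) is positive and convex; the precise conditions and the convergence proof are in the paper itself. For fixed x the auxiliary variable y_m has the closed form shown, and alternating between this closed-form update and a convex problem in x gives the iterative algorithm mentioned above.

    ```latex
    % Quadratic transform for the multiple-ratio problem (standard statement).
    \max_{x}\; \sum_{m=1}^{M} \frac{A_m(x)}{B_m(x)}
    \quad\longrightarrow\quad
    \max_{x,\,y}\; \sum_{m=1}^{M} \Bigl( 2\, y_m \sqrt{A_m(x)} - y_m^{2}\, B_m(x) \Bigr),
    \qquad
    y_m^{\star} = \frac{\sqrt{A_m(x)}}{B_m(x)} .
    ```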

  10. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  11. Design and Development of an Online Video Enhanced Case-Based Learning Environment for Teacher Education

    ERIC Educational Resources Information Center

    Saltan, Fatih; Özden, M. Yasar; Kiraz, Ercan

    2016-01-01

    People generally prefer to use stories in order to provide context when expressing a point. Spreading a message without context is unlikely to be meaningful. Like stories, cases have contextual meaning and allow learners to see a situation from multiple perspectives. The main purpose of the present study was to investigate how to design and…

  12. Copyfitting Instructor: A Computer-Assisted Instructional Aid for the Dirtiest Job in Advertising Design and Production Courses.

    ERIC Educational Resources Information Center

    Wesson, David A.

    Copyfitting is probably the least exciting portion of any course that deals with design and production of print advertising. Students find the transformation of manuscript copy into set type difficult to visualize. The math, though no more than multiplication and division, seems insurmountable to some, probably because the entities such as points,…

  13. Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses

    NASA Astrophysics Data System (ADS)

    Murphy, Christian E.

    2018-05-01

    Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
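
    As background on the construction itself (standard statistics, not specific to the redesigned symbols in the paper), an error ellipse for a point with a 2 x 2 positional covariance matrix is oriented and scaled by the eigenstructure of that matrix; the scale factor k sets the confidence level (for a 95% region in two dimensions, k is roughly 2.45).

    ```latex
    % Standard error-ellipse construction from a 2x2 positional covariance matrix.
    \Sigma =
    \begin{pmatrix} \sigma_x^{2} & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^{2} \end{pmatrix},
    \qquad
    \lambda_{1,2} = \frac{\sigma_x^{2}+\sigma_y^{2}}{2}
      \pm \sqrt{\left(\frac{\sigma_x^{2}-\sigma_y^{2}}{2}\right)^{2} + \sigma_{xy}^{2}},
    \qquad
    \theta = \tfrac{1}{2}\arctan\!\frac{2\,\sigma_{xy}}{\sigma_x^{2}-\sigma_y^{2}},
    \qquad
    a = k\sqrt{\lambda_{1}},\;\; b = k\sqrt{\lambda_{2}} .
    ```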

  14. Nursing Reference Center: a point-of-care resource.

    PubMed

    Vardell, Emily; Paulaitis, Gediminas Geddy

    2012-01-01

    Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.

  15. Report for simultaneous, multiple independently steered beam study for Airborne Electronically Steerable Phased Array (AESPA) program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Design concepts of an array for the formation of multiple, simultaneous, independently pointed beams for satellite communication links were investigated through tradeoffs of various approaches which were conceived as possible solutions to the problem. After the preferred approach was selected, a more detailed design was configured and is presented as a candidate system that should be given further consideration for development leading to a preliminary design. This array uses an attenuator and a phase shifter with every element. The aperture excitation necessary to form the four beams is calculated and then placed across the array using these devices. Pattern analysis was performed for two-beam and four-beam cases, with numerous patterns being presented. Parameter evaluation shown includes pointing accuracy and beam shape, sidelobe characteristics, gain control, and beam normalization. It was demonstrated that a 4-bit phase shifter and a 6-bit, 30 dB attenuator were sufficient to achieve adequate pattern performance. The phase- and amplitude-steered multibeam array offers the flexibility of 1 to 4 beams with an increase in gain of 6 dB if only one beam is selected.

  16. Apollo: Giving application developers a single point of access to public health models using structured vocabularies and Web services

    PubMed Central

    Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417

  17. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    PubMed

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem, which we define as a configuration and a query of results, exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  18. Pareto fronts for multiobjective optimization design on materials data

    NASA Astrophysics Data System (ADS)

    Gopakumar, Abhijith; Balachandran, Prasanna; Gubernatis, James E.; Lookman, Turab

    Optimizing multiple properties simultaneously is vital in materials design. Here we apply information-driven statistical optimization strategies blended with machine learning methods to address multi-objective optimization tasks on materials data. These strategies aim to find the Pareto front consisting of non-dominated data points from a set of candidate compounds with known characteristics. The objective is to find the Pareto front in as few additional measurements or calculations as possible. We show how exploration of the data space to find the front is achieved by using uncertainties in predictions from regression models. We test our proposed design strategies on multiple, independent data sets including those from computations as well as experiments. These include data sets for MAX phases, piezoelectrics and multicomponent alloys.
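
    As a minimal illustration of the non-dominated set that such strategies target, the sketch below extracts the Pareto front from a set of candidates when every objective is to be maximized. It is generic code on made-up data; the design strategies described above additionally use model uncertainties to decide which candidate to measure or compute next.

    ```python
    # Extract the Pareto front (non-dominated points) under maximization of all objectives.
    import numpy as np

    def pareto_mask(Y):
        """Boolean mask of non-dominated rows of the objective matrix Y."""
        n = Y.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if not keep[i]:
                continue
            # point j dominates i if it is >= in every objective and > in at least one
            dominated = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
            if dominated.any():
                keep[i] = False
        return keep

    Y = np.random.default_rng(1).random((200, 2))   # two hypothetical properties
    front = Y[pareto_mask(Y)]
    print(len(front), "non-dominated candidates")
    ```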

  19. Coexisting multiple attractors and riddled basins of a memristive system.

    PubMed

    Wang, Guangyi; Yuan, Fang; Chen, Guanrong; Zhang, Yu

    2018-01-01

    In this paper, a new memristor-based chaotic system is designed, analyzed, and implemented. Multistability, multiple attractors, and complex riddled basins are observed in the system and are investigated along with other dynamical behaviors such as equilibrium points and their stabilities, symmetrical bifurcation diagrams, and sustained chaotic states. With different sets of system parameters, the system can also generate various multi-scroll attractors. Finally, the system is realized by experimental circuits.

  20. On the rational design of compressible flow ejectors

    NASA Technical Reports Server (NTRS)

    Ortwerth, P. J.

    1979-01-01

    A fluid mechanics review of chemical laser ejectors is presented. The characteristics of ejectors with single and multiple driver nozzles are discussed. Methods are examined to compute an optimized performance map, in which secondary Mach number and performance are computed versus mass ratio; to compute the flow distortion at each optimized condition; and to determine the thrust area for the design point to match diffuser impedance.

  1. A Comparison of Three IRT Approaches to Examinee Ability Change Modeling in a Single-Group Anchor Test Design

    ERIC Educational Resources Information Center

    Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim

    2014-01-01

    Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…

  2. Ku-band multiple beam antenna

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Franklin, C. F.

    1980-01-01

    The frequency reuse capability is demonstrated for a Ku-band multiple beam antenna which provides contiguous low sidelobe spot beams for point-to-point communications between any two points within the continental United States (CONUS), or regional coverage beams for direct broadcast systems. A spot beam antenna in the 14/21 GHz band, which provides contiguous overlapping beams covering CONUS and two discrete beams covering Hawaii and Alaska, was designed, developed, and tested. Two reflector antennas are required for providing contiguous coverage of CONUS. Each consists of one offset parabolic reflector, one flat polarization diplexer, and two separate planar array feeds. This antenna system provides contiguous spot beam coverage of CONUS, utilizing 15 beams. Also designed, developed, and demonstrated was a shaped contoured beam antenna system which provides contiguous four-time-zone coverage of CONUS from a single offset parabolic reflector incorporating one flat polarization diplexer and two separate planar array feeds. The beams which illuminate the Eastern time zone and the Mountain time zone are horizontally polarized, while the beams which illuminate the Central time zone and the Pacific time zone are vertically polarized. Frequency reuse is achieved by amplitude and polarization isolation.

  3. Design of multi-function sensor detection system in coal mine based on ARM

    NASA Astrophysics Data System (ADS)

    Ge, Yan-Xiang; Zhang, Quan-Zhu; Deng, Yong-Hong

    2017-06-01

    In a traditional coal mine sensor, the number and type of channels at a specific measurement point may be greater or smaller than the number of monitoring points, resulting either in wasted resources or in failure to meet application requirements. To enable the sensor to adapt to the needs of different situations and to reduce cost, a multi-functional intelligent sensor is designed and realized using multiple sensing elements and an ARM11 S3C6410 processor. The design combines dust, gas, temperature, and humidity sensing in a single device, and adds storage, display, voice, picture, data query, alarm, and other new functions.

  4. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics analyses, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
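
    The master-slave pattern mentioned above can be sketched in a few lines: the master process holds the population and farms the candidate evaluations out to worker processes. The objective function below is a cheap stand-in for an expensive solver such as a CFD run; the population size, worker count, and objective are all illustrative assumptions.

    ```python
    # Master-slave evaluation of a candidate population (toy stand-in objective).
    from multiprocessing import Pool
    import random

    def evaluate(design):                       # stand-in for an expensive evaluation code
        x, y = design
        return (x - 1.0) ** 2 + (y + 2.0) ** 2

    if __name__ == "__main__":
        population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(32)]
        with Pool(processes=4) as pool:         # "slaves" evaluate candidates in parallel
            fitness = pool.map(evaluate, population)
        best_fitness, best_design = min(zip(fitness, population))
        print("best candidate:", best_design, "fitness:", best_fitness)
    ```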

  5. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Summary Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116

  6. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.

  7. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  8. Demonstration of a Balloon Borne Arc-Second Pointer Design

    NASA Technical Reports Server (NTRS)

    DeWeese, Keith D.; Ward, Philip R.

    2006-01-01

    Many designs for utilizing stratospheric balloons as low-cost platforms on which to conduct space science experiments have been proposed throughout the years. A major hurdle in extending the range of experiments for which these vehicles are useful has been the imposition of the gondola dynamics on the accuracy with which an instrument can be kept pointed at a celestial target. A significant number of scientists have sought the ability to point their instruments with jitter in the arc-second range. This paper presents the design and analysis of a stratospheric balloon borne pointing system that is able to meet this requirement. The test results of a demonstration prototype of the design with similar ability are also presented. A high-fidelity controller simulation used for design analysis is discussed. The flexibility of the flight train is represented through generalized modal analysis. A multiple controller scheme is utilized for coarse and fine pointing. Coarse azimuth pointing is accomplished by an established pointing system, with extensive flight history, residing above the gondola structure. A pitch-yaw gimbal mount is used for fine pointing, providing orthogonal axes when nominally on target. Fine pointing actuation is from direct drive dc motors, eliminating backlash problems. An analysis of friction nonlinearities and a demonstration of the necessity of eliminating static friction are provided. A unique bearing hub design is introduced that eliminates static friction from the system dynamics. A control scheme involving linear accelerometers for enhanced disturbance rejection is also presented. Results from a linear analysis of the total system and the high fidelity simulation are given. Results from a generalized demonstration prototype are presented. Commercial off-the-shelf (COTS) hardware was used to demonstrate the efficacy and performance of the pointer design for a mock instrument. The test results demonstrate sub-arcsecond pointing ability in a ground hang test setup. This paper establishes that the proposed control strategy can be made robustly stable with significant design margins. Also demonstrated is the efficacy of the proposed system in rejecting disturbances larger than those considered realistic. The system is implemented and demonstrates sub-arcsecond pointing ability using COTS hardware. Finally, we see that sub-arcsecond pointing stability can be achieved for a large instrument pointing at an inertial target.

  9. Expert system for generating initial layouts of zoom systems with multiple moving lens groups

    NASA Astrophysics Data System (ADS)

    Cheng, Xuemin; Wang, Yongtian; Hao, Qun; Sasián, José M.

    2005-01-01

    An expert system is developed for the automatic generation of initial layouts for the design of zoom systems with multiple moving lens groups. The Gaussian parameters of the zoom system are optimized using the damped-least-squares method to achieve smooth zoom cam curves, with the f-number of each lens group in the zoom system constrained to a rational value. Then each lens group is selected automatically from a database according to its range of f-number, field of view, and magnification ratio as it is used in the zoom system. The lens group database is established from the results of analyzing thousands of zoom lens patents. Design examples are given, which show that the scheme is a practical approach to generate starting points for zoom lens design.

  10. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    NASA Astrophysics Data System (ADS)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high-precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized by LightTools software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  11. Mapping ecosystem service indicators in a Great Lakes estuarine Area of Concern

    EPA Science Inventory

    Estuaries provide multiple ecosystem services from which humans benefit. Currently, thirty-six Great Lakes estuaries in the United States and Canada are designated as Areas of Concern (AOCs) due to a legacy of chemical contamination, degraded habitat, and non-point-source polluti...

  12. Floating-Point Numerical Function Generators Using EVMDDs for Monotone Elementary Functions

    DTIC Science & Technology

    2009-01-01

    Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, “Multi-valued decision diagrams: Theory and applications,” Multiple-Valued Logic: An...Shmerko, and R. S. Stankovic, Decision Diagram Techniques for Micro- and Nanoelectronic Design, CRC Press, Taylor & Francis Group, 2006. Appendix

  13. Research on reform plan of civil engineering adult education graduation design

    NASA Astrophysics Data System (ADS)

    Su, Zhibin; Sun, Shengnan; Cui, Shicai

    2017-12-01

    For the graduation design component of civil engineering adult education, a reform program is put forward based on our school's situation. The main points of the reform include the following aspects. A new pattern of graduation design is formed, consisting of basic training in engineering design, technical application, and engineering innovation training. An integrated model of graduation design and employment is carried out. A graduation design pattern with guidance from multiple professions is put forward. The subject of the graduation design is chosen based on the school's actual circumstances. A “three-stage” quality monitoring system is established. A performance evaluation pattern that includes two oral examinations of the dissertation is strictly carried out.

  14. Optical multiple access techniques for on-board routing

    NASA Technical Reports Server (NTRS)

    Mendez, Antonio J.; Park, Eugene; Gagliardi, Robert M.

    1992-01-01

    The purpose of this research contract was to design and analyze an optical multiple access system, based on Code Division Multiple Access (CDMA) techniques, for on board routing applications on a future communication satellite. The optical multiple access system was to effect the functions of a circuit switch under the control of an autonomous network controller and to serve eight (8) concurrent users at a point to point (port to port) data rate of 180 Mb/s. (At the start of this program, the bit error rate requirement (BER) was undefined, so it was treated as a design variable during the contract effort.) CDMA was selected over other multiple access techniques because it lends itself to bursty, asynchronous, concurrent communication and potentially can be implemented with off the shelf, reliable optical transceivers compatible with long term unattended operations. Temporal, temporal/spatial hybrids and single pulse per row (SPR, sometimes termed 'sonar matrices') matrix types of CDMA designs were considered. The design, analysis, and trade offs required by the statement of work selected a temporal/spatial CDMA scheme which has SPR properties as the preferred solution. This selected design can be implemented for feasibility demonstration with off the shelf components (which are identified in the bill of materials of the contract Final Report). The photonic network architecture of the selected design is based on M(8,4,4) matrix codes. The network requires eight multimode laser transmitters with laser pulses of 0.93 ns operating at 180 Mb/s and 9-13 dBm peak power, and 8 PIN diode receivers with sensitivity of -27 dBm for the 0.93 ns pulses. The wavelength is not critical, but 830 nm technology readily meets the requirements. The passive optical components of the photonic network are all multimode and off the shelf. Bit error rate (BER) computations, based on both electronic noise and intercode crosstalk, predict a raw BER of (10 exp -3) when all eight users are communicating concurrently. If better BER performance is required, then error correction codes (ECC) using near term electronic technology can be used. For example, the M(8,4,4) optical code together with Reed-Solomon (54,38,8) encoding provides a BER of better than (10 exp -11). The optical transceiver must then operate at 256 Mb/s with pulses of 0.65 ns because the 'bits' are now channel symbols.

  15. Writing Multiple Choice Outcome Questions to Assess Knowledge and Competence.

    PubMed

    Brady, Erik D

    2015-11-01

    Few articles contemplate the need for good guidance in question item-writing in the continuing education (CE) space. Although many of the core principles of sound item design translate to the CE health education team, the need exists for specific examples for nurse educators that clearly describe how to measure changes in competence and knowledge using multiple choice items. In this article, some key points and specific examples for nursing CE providers are shared. Copyright 2015, SLACK Incorporated.

  16. Jumping to Quadratic Models

    ERIC Educational Resources Information Center

    Gunter, Devon

    2016-01-01

    It is no easy feat to engage young people with abstract material as well as push them to greater depths of understanding. Add in the extra pressures of curriculum expectations and standards and the problem is exacerbated. Projects designed around standards and having multiple entry points clearly offer students the best opportunity to engage with…

  17. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  18. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  19. FireProt: web server for automated design of thermostable proteins

    PubMed Central

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    Abstract There is continuous interest in increasing the stability of proteins to enhance their usability in numerous biomedical and biotechnological applications. A number of in silico tools for the prediction of the effect of mutations on protein stability have been developed recently. However, only single-point mutations with a small effect on protein stability are typically predicted with the existing tools and have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  20. Expression and immunogenicity of novel subunit enterovirus 71 VP1 antigens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Juan; Department of Microbiology and Immunology, Nanjing Medical University; Wang, Shixia

    Highlights: EV71 is a major emerging infectious disease in many Asian countries. Inactivated EV71 vaccines are in clinical studies but their safety and efficacy are unknown. Developing subunit based EV71 vaccines is significant and novel antigen design is needed. DNA immunization is an efficient tool to test the immunogenicity of VP1 based EV71 vaccines. Multiple VP1 antigens are developed showing immunogenic potential. -- Abstract: Hand, foot, and mouth disease (HFMD) is a common viral illness in young children. HFMD is caused by viruses belonging to the enterovirus genus of the picornavirus family. Recently, enterovirus 71 (EV71) has emerged as a virulent agent for HFMD with severe clinical outcomes. In the current report, we conducted a pilot antigen engineering study to optimize the expression and immunogenicity of subunit VP1 antigen for the design of EV71 vaccines. DNA immunization was adopted as a simple technical approach to test different designs of VP1 antigens without the need to express VP1 protein in vitro first. Our studies indicated that the expression and immunogenicity of VP1 protein can be improved with alternated VP1 antigen designs. Data presented in the current report revealed novel pathways to optimize the design of VP1 antigen-based EV71 vaccines.

  1. Centrifugal multiplexing fixed-volume dispenser on a plastic lab-on-a-disk for parallel biochemical single-end-point assays

    PubMed Central

    La, Moonwoo; Park, Sang Min; Kim, Dong Sung

    2015-01-01

    In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performance, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of the sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemicals. PMID:25610516

  2. Decomposition-Based Decision Making for Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas K.; Mavris, DImitri N.

    2005-01-01

    Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective (requirement) is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as now it will need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints to this single or multiple objective optimization problem. With either of these formulations, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design into a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique is in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? A series of examples help the reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.

  3. Cooperative path following control of multiple nonholonomic mobile robots.

    PubMed

    Cao, Ke-Cai; Jiang, Bin; Yue, Dong

    2017-11-01

    The cooperative path following control problem of multiple nonholonomic mobile robots has been considered in this paper. Based on the framework of decomposition, the cooperative path following problem has been transformed into a path following problem and a cooperative control problem; the cascade theory of non-autonomous systems has then been employed in the design of controllers without resorting to feedback linearization. A time-varying coordinate transformation based on dilation has been introduced to address the loss of controllability of nonholonomic robots when the whole group's reference converges to a stationary point. Cooperative path following controllers for nonholonomic robots have been proposed for a persistent reference and for a reference target that converges to a stationary point, respectively. Simulation results using Matlab have illustrated the effectiveness of the obtained theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. How to Construct a Mixed Methods Research Design.

    PubMed

    Schoonenboom, Judith; Johnson, R Burke

    2017-01-01

    This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.

  5. Rise to SUMMIT: the Sydney University Multiple-Mirror Telescope

    NASA Astrophysics Data System (ADS)

    Moore, Anna M.; Davis, John

    2000-07-01

    The Sydney University Multiple Mirror Telescope (SUMMIT) is a medium-sized telescope designed specifically for high resolution stellar spectroscopy. Throughout the design emphasis has been placed on high efficiency at low cost. The telescope consists of four 0.46 m diameter mirrors mounted on a single welded steel frame. Specially designed mirror cells support and point each mirror, allowing accurate positioning of the images on optical fibers located at the foci of the mirrors. Four fibers convey the light to the future location of a high resolution spectrograph away from the telescope in a stable environment. An overview of the commissioning of the telescope is presented, including the guidance and automatic mirror alignment and focussing systems. SUMMIT is located alongside the Sydney University Stellar Interferometer at the Paul Wild Observatory, near Narrabri, Northern New South Wales.

  6. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, since the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.

  7. Theoretical lower bounds for parallel pipelined shift-and-add constant multiplications with n-input arithmetic operators

    NASA Astrophysics Data System (ADS)

    Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana

    2017-12-01

    New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
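
    As a toy illustration of the kind of shift-and-add block whose operator count these bounds address, the sketch below multiplies by the constant 45 in two ways: directly from the binary expansion, and with a shared subexpression that saves an adder. The constant and the sharing are illustrative only; the paper's bounds concern pipelined blocks built from n-input operators.

    ```python
    # Shift-and-add constant multiplication by 45 (binary 101101).
    def times_45_naive(x: int) -> int:
        return (x << 5) + (x << 3) + (x << 2) + x     # 4 shifted terms, 3 additions

    def times_45_shared(x: int) -> int:
        t = (x << 3) + x                              # t = 9*x          (1 adder)
        return (t << 2) + t                           # 45*x = 4*t + t   (1 more adder)

    assert all(times_45_naive(v) == 45 * v == times_45_shared(v) for v in range(1, 100))
    ```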

  8. Multiplicity: discussion points from the Statisticians in the Pharmaceutical Industry multiplicity expert group.

    PubMed

    Phillips, Alan; Fletcher, Chrissie; Atkinson, Gary; Channon, Eddie; Douiri, Abdel; Jaki, Thomas; Maca, Jeff; Morgan, David; Roger, James Henry; Terrill, Paul

    2013-01-01

    In May 2012, the Committee of Health and Medicinal Products issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints given these are increasing in importance in differentiating products in the market place. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.
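
    To make the notion of an adjustment concrete, the sketch below implements one common procedure, Holm's step-down correction of p-values. It is included purely as a generic illustration and is not one of the specific strategies debated at the meeting.

    ```python
    # Holm's step-down multiplicity adjustment for a list of p-values.
    def holm_adjust(p_values):
        """Return Holm-adjusted p-values in the original order."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        adjusted = [0.0] * m
        running_max = 0.0
        for rank, i in enumerate(order):
            adj = min(1.0, (m - rank) * p_values[i])
            running_max = max(running_max, adj)       # enforce monotonicity
            adjusted[i] = running_max
        return adjusted

    print(holm_adjust([0.01, 0.04, 0.03, 0.20]))      # -> [0.04, 0.09, 0.09, 0.2]
    ```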

  9. Accessing the exceptional points of parity-time symmetric acoustics

    PubMed Central

    Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang

    2016-01-01

    Parity-time (PT) symmetric systems experience phase transition between PT exact and broken phases at exceptional point. These PT phase transitions contribute significantly to the design of single mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443

  10. Training Policy Students to Hit the Ground Running: The Design of an Integrative Core Course

    ERIC Educational Resources Information Center

    Chetkovich, Carol; Henderson, Mark

    2014-01-01

    Effective public policy education must prepare students both to integrate the lessons of multiple disciplines and to apply these across diverse substantive areas. How can these objectives best be accomplished? Research on adult learning and professional education points toward applied, problem-based, cooperative, and student-driven pedagogy. This…

  11. Community Destruction and Traumatic Stress in Post-Tsunami Indonesia

    ERIC Educational Resources Information Center

    Frankenberg, Elizabeth; Nobles, Jenna; Sumantri, Cecep

    2012-01-01

    How are individuals affected when the communities they live in change for the worse? This question is central to understanding neighborhood effects, but few study designs generate estimates that can be interpreted causally. We address issues of inference through a natural experiment, examining post-traumatic stress at multiple time points in a…

  12. Improving Problem-Solving Performance of Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Taber-Doughty, Teresa

    2017-01-01

    The effectiveness of a multicomponent intervention to improve the problem-solving performance of students with autism spectrum disorders (ASD) during vocational tasks was examined. A multiple-probe across-students design was used to illustrate the effectiveness of point-of-view video modeling paired with practice sessions and a self-operated cue…

  13. Parallel multipoint recording of aligned and cultured neurons on micro channel array toward cellular network analysis.

    PubMed

    Tonomura, Wataru; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Konishi, Satoshi

    2010-08-01

    This paper describes an advanced Micro Channel Array (MCA) for recording electrophysiological signals of neuronal networks at multiple points simultaneously. The developed MCA is designed for neuronal network analysis which has been studied by the co-authors using the Micro Electrode Arrays (MEA) system, and employs the principles of extracellular recordings. A prerequisite for extracellular recordings with good signal-to-noise ratio is a tight contact between cells and electrodes. The MCA described herein has the following advantages. The electrodes integrated around individual micro channels are electrically isolated to enable parallel multipoint recording. Reliable clamping of a targeted cell through micro channels is expected to improve the cellular selectivity and the attachment between the cell and the electrode toward steady electrophysiological recordings. We cultured hippocampal neurons on the developed MCA. As a result, the spontaneous and evoked spike potentials could be recorded by sucking and clamping the cells at multiple points. In this paper, we describe the design and fabrication of the MCA and the successful electrophysiological recordings leading to the development of an effective cellular network analysis device.

  14. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
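
    A rough sketch of the greedy incremental idea for a single shell is given below: from a pool of candidate unit vectors, repeatedly pick the candidate whose minimal angle to the already-chosen directions is largest, treating antipodal directions as identical, as is usual for diffusion gradients. This is illustrative only and is not the authors' MILP or gradient-descent solver; the pool size and shell size are assumptions.

    ```python
    # Greedy max-min angular separation on a single shell (illustrative sketch).
    import numpy as np

    def greedy_spherical_code(candidates, k):
        chosen = [0]                                        # start from an arbitrary candidate
        for _ in range(k - 1):
            # |dot| handles antipodal symmetry; smaller |dot| means larger angle
            dots = np.abs(candidates @ candidates[chosen].T)    # (n, len(chosen))
            worst = dots.max(axis=1)                        # nearest chosen point, per candidate
            worst[chosen] = np.inf                          # exclude already-chosen points
            chosen.append(int(worst.argmin()))              # candidate with the largest min angle
        return candidates[chosen]

    rng = np.random.default_rng(0)
    pool = rng.standard_normal((2000, 3))
    pool /= np.linalg.norm(pool, axis=1, keepdims=True)     # candidate unit vectors
    scheme = greedy_spherical_code(pool, 30)                # 30-direction single-shell scheme
    print(scheme.shape)
    ```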

  15. User localization in complex environments by multimodal combination of GPS, WiFi, RFID, and pedometer technologies.

    PubMed

    Dao, Trung-Kien; Nguyen, Hung-Long; Pham, Thanh-Thuy; Castelli, Eric; Nguyen, Viet-Tung; Nguyen, Dinh-Van

    2014-01-01

    Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed combining multiple localization technologies into a single system that can provide higher-precision results and overcome the limitations of each individual technology. In this paper, a conceptual design of a general localization platform using a combination of multiple localization technologies is introduced. The combination is realized by dividing spaces into grid points. To demonstrate this platform, a system with GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that accuracy and availability are improved in comparison with each technology used individually.

  16. User Localization in Complex Environments by Multimodal Combination of GPS, WiFi, RFID, and Pedometer Technologies

    PubMed Central

    Dao, Trung-Kien; Nguyen, Hung-Long; Pham, Thanh-Thuy; Nguyen, Viet-Tung; Nguyen, Dinh-Van

    2014-01-01

    Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed combining multiple localization technologies into a single system that can provide higher-precision results and overcome the limitations of each individual technology. In this paper, a conceptual design of a general localization platform using a combination of multiple localization technologies is introduced. The combination is realized by dividing spaces into grid points. To demonstrate this platform, a system with GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that accuracy and availability are improved in comparison with each technology used individually. PMID:25147866
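
    One simple way to picture the grid-point combination described above is to give every grid point a score under each technology and pick the best-scoring point. The sketch below does this with made-up Gaussian scores around each technology's own position fix; the estimates, accuracies, and scoring rule are illustrative assumptions, not the platform's actual fusion rule.

    ```python
    # Fusing several position fixes on a common grid (illustrative assumptions only).
    import numpy as np

    grid = np.array([(x, y) for x in range(0, 50, 5) for y in range(0, 50, 5)], dtype=float)

    estimates = {            # hypothetical per-technology fixes and accuracies (metres)
        "gps":  ((22.0, 31.0), 8.0),
        "wifi": ((25.0, 28.0), 5.0),
        "rfid": ((24.0, 30.0), 2.0),
    }

    score = np.zeros(len(grid))
    for pos, sigma in estimates.values():
        d2 = np.sum((grid - np.array(pos)) ** 2, axis=1)
        score += np.exp(-d2 / (2 * sigma ** 2))        # closer grid points score higher

    print("fused position estimate:", grid[int(score.argmax())])
    ```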

  17. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as permutation-matrix design for the 3GPP ultra mobile broadband physical layer, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and Alamouti precoding design for 4G MIMO long-term evolution.
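
    For readers unfamiliar with Jacket matrices, the toy check below illustrates the defining property assumed throughout (the inverse is the transposed element-wise reciprocal scaled by 1/n) together with the Kronecker-product construction of higher orders. It is a numerical sanity check, not the paper's fast BIJT algorithm.

    ```python
    # Toy numerical check: a Jacket matrix J of order n satisfies
    # inv(J) = (1/n) * (element-wise reciprocal of J)^T, and the Kronecker
    # product of Jacket matrices is again a Jacket matrix.
    import numpy as np

    def jacket_inverse(J):
        n = J.shape[0]
        return (1.0 / J).T / n

    H2 = np.array([[1.0, 1.0],
                   [1.0, -1.0]])      # order-2 Hadamard matrix is a Jacket matrix
    J4 = np.kron(H2, H2)              # order 2^k built by successive Kronecker products

    for M in (H2, J4):
        assert np.allclose(M @ jacket_inverse(M), np.eye(M.shape[0]))
    print("Jacket property holds for orders", H2.shape[0], "and", J4.shape[0])
    ```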

  18. Designs for Evaluating the Community-Level Impact of Comprehensive Prevention Programs: Examples from the CDC Centers of Excellence in Youth Violence Prevention.

    PubMed

    Farrell, Albert D; Henry, David; Bradshaw, Catherine; Reischl, Thomas

    2016-04-01

This article discusses the opportunities and challenges of developing research designs to evaluate the impact of community-level prevention efforts. To illustrate examples of evaluation designs, we describe six projects funded by the Centers for Disease Control and Prevention to evaluate multifaceted approaches to reduce youth violence in high-risk communities. Each of these projects was designed to evaluate the community-level impact of multiple intervention strategies to address individual and contextual factors that place youth at risk for violent behavior. Communities differed across projects in their setting, size, and how their boundaries were defined. Each project is using multiple approaches to compare outcomes in one or more intervention communities to those in comparison communities. Five of the projects are using comparative interrupted time-series designs to compare outcomes in an intervention community to matched comparison communities. A sixth project is using a multiple baseline design in which the order and timing of intervention activities are randomized across three communities. All six projects are also using regression point displacement designs to compare outcomes within intervention communities to those within broader sets of similar communities. Projects are using a variety of approaches to assess outcomes including archival records, surveys, and direct observations. We discuss the strengths and weaknesses of the designs of these projects and illustrate the challenges of designing high-quality evaluations of comprehensive prevention approaches implemented at the community level.

  19. Balloon Borne Arc-Second Pointer Feasibility Study

    NASA Technical Reports Server (NTRS)

    Ward, Philip R.; DeWeese, Keith D.

    2003-01-01

For many years scientists have been utilizing stratospheric balloons as low-cost platforms on which to conduct space science experiments. A major hurdle in extending the range of experiments for which these vehicles are useful has been the imposition of the gondola dynamics on the accuracy with which an instrument can be kept pointed at a celestial target. A significant number of scientists have sought the ability to point their instruments with jitter in the arc-second range. This paper presents the design and analysis of a stratospheric balloon-borne pointing system that is able to meet this requirement. The foundation for a high fidelity controller simulation is presented. The flexibility of the flight train is represented through generalized modal analysis. A multiple controller scheme is introduced for coarse and fine pointing. Coarse azimuth pointing is accomplished by an established pointing system, with extensive flight history, residing above the gondola structure. A pitch-yaw gimbal mount is used for fine pointing, providing orthogonal axes when nominally on target. Fine pointing actuation is from direct drive dc motors, eliminating backlash problems. An analysis of friction nonlinearities and a demonstration of the necessity of eliminating static friction are provided. A unique bearing hub design is introduced that eliminates static friction from the system dynamics. A control scheme involving linear accelerometers for enhanced disturbance rejection is also presented. Results from a linear analysis of the total system and the high fidelity simulation are given. This paper establishes that the proposed control strategy can be made robustly stable with significant design margins. Also demonstrated is the efficacy of the proposed system in rejecting disturbances larger than those considered realistic. Finally, we see that sub-arc-second pointing stability can be achieved for a large instrument pointing at an inertial target.

  20. High-speed two-dimensional laser scanner based on Bragg gratings stored in photothermorefractive glass.

    PubMed

    Yaqoob, Zahid; Arain, Muzammil A; Riza, Nabeel A

    2003-09-10

    A high-speed free-space wavelength-multiplexed optical scanner with high-speed wavelength selection coupled with narrowband volume Bragg gratings stored in photothermorefractive (PTR) glass is reported. The proposed scanner with no moving parts has a modular design with a wide angular scan range, accurate beam pointing, low scanner insertion loss, and two-dimensional beam scan capabilities. We present a complete analysis and design procedure for storing multiple tilted Bragg-grating structures in a single PTR glass volume (for normal incidence) in an optimal fashion. Because the scanner design is modular, many PTR glass volumes (each having multiple tilted Bragg-grating structures) can be stacked together, providing an efficient throughput with operations in both the visible and the infrared (IR) regions. A proof-of-concept experimental study is conducted with four Bragg gratings in independent PTR glass plates, and both visible and IR region scanner operations are demonstrated.

  1. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 when using 49 tiles.
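
    For context, kernel throughput figures of this kind are typically obtained by counting floating-point operations and dividing by wall-clock time. The sketch below does exactly that for a matrix multiply on whatever machine runs it; the 5.7 GFLOPS and 49-tile speedup numbers above come from the Maestro hardware, not from this code.

    ```python
    # Simple benchmarking sketch of the matrix-multiplication kernel: count
    # 2*N^3 floating-point operations and divide by wall-clock time.
    import time
    import numpy as np

    def matmul_gflops(n=1024, repeats=5):
        a = np.random.rand(n, n).astype(np.float32)
        b = np.random.rand(n, n).astype(np.float32)
        t0 = time.perf_counter()
        for _ in range(repeats):
            a @ b
        elapsed = (time.perf_counter() - t0) / repeats
        return 2.0 * n ** 3 / elapsed / 1e9

    print(f"matrix multiply: {matmul_gflops():.1f} GFLOPS")
    ```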

  2. Control system design for the large space systems technology reference platform

    NASA Technical Reports Server (NTRS)

    Edmunds, R. S.

    1982-01-01

Structural models and classical frequency domain control system designs were developed for the large space systems technology (LSST) reference platform, which consists of a central bus structure, solar panels, and platform arms on which a variety of experiments may be mounted. It is shown that operation of multiple independently articulated payloads on a single platform presents major problems when sub-arc-second pointing stability is required. Experiment compatibility will be an important operational consideration for systems of this type.

  3. On-board B-ISDN fast packet switching architectures. Phase 1: Study

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Lee, Fred; Paul, Dilip; Shyy, Dong-Jye

    1993-01-01

The broadband integrated services digital network (B-ISDN) is an emerging telecommunications technology that will meet most of the telecommunications networking needs in the mid-1990's to early next century. The satellite-based system is well positioned for providing B-ISDN service with its inherent capabilities of point-to-multipoint and broadcast transmission, virtually unlimited connectivity between any two points within a beam coverage, short deployment time of communications facility, flexible and dynamic reallocation of space segment capacity, and distance-insensitive cost. On-board processing satellites, particularly in a multiple spot beam environment, will provide enhanced connectivity, better performance, optimized access and transmission link design, and lower user service cost. The following are described: the user and network aspects of broadband services; the current development status in broadband services; various satellite network architectures including system design issues; and various fast packet switch architectures and their detailed designs.

  4. Spatial judgments in the horizontal and vertical planes from different vantage points.

    PubMed

    Prytz, Erik; Scerbo, Mark W

    2012-01-01

    Todorović (2008 Perception 37 106-125) reported that there are systematic errors in the perception of 3-D space when viewing 2-D linear perspective drawings depending on the observer's vantage point. Because these findings were restricted to the horizontal plane, the current study was designed to determine the nature of these errors in the vertical plane. Participants viewed an image containing multiple colonnades aligned on parallel converging lines receding to a vanishing point. They were asked to judge where, in the physical room, the next column should be placed. The results support Todorović in that systematic deviations in the spatial judgments depended on vantage point for both the horizontal and vertical planes. However, there are also marked differences between the two planes. While judgments in both planes failed to compensate adequately for the vantage-point shift, the vertical plane induced greater distortions of the stimulus image itself within each vantage point.

  5. Formation tracker design of multiple mobile robots with wheel perturbations: adaptive output-feedback approach

    NASA Astrophysics Data System (ADS)

    Yoo, Sung Jin

    2016-11-01

    This paper presents a theoretical design approach for output-feedback formation tracking of multiple mobile robots under wheel perturbations. It is assumed that these perturbations are unknown and the linear and angular velocities of the robots are unmeasurable. First, adaptive state observers for estimating unmeasurable velocities of the robots are developed under the robots' kinematics and dynamics including wheel perturbation effects. Then, we derive a virtual-structure-based formation tracker scheme according to the observer dynamic surface design procedure. The main difficulty of the output-feedback control design is to manage the coupling problems between unmeasurable velocities and unknown wheel perturbation effects. These problems are avoided by using the adaptive technique and the function approximation property based on fuzzy logic systems. From the Lyapunov stability analysis, it is shown that point tracking errors of each robot and synchronisation errors for the desired formation converge to an adjustable neighbourhood of the origin, while all signals in the controlled closed-loop system are semiglobally uniformly ultimately bounded.

  6. Multiple Access Points within the Online Classroom: Where Students Look for Information

    ERIC Educational Resources Information Center

    Steele, John; Nordin, Eric J.; Larson, Elizabeth; McIntosh, Daniel

    2017-01-01

    The purpose of this study is to examine the impact of information placement within the confines of the online classroom architecture. Also reviewed was the impact of other variables such as course design, teaching presence and student patterns in looking for information. The sample population included students from a major online university in…

  7. Extending Research on a Computer-Based Flashcard Reading Intervention to Postsecondary Students with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Cazzell, Samantha; Browarnik, Brooke; Skinner, Amy; Skinner, Christopher; Cihak, David; Ciancio, Dennis; McCurdy, Merilee; Forbes, Bethany

    2016-01-01

    A multiple-baseline across-students design was used to evaluate the effects of a computer-based flashcard reading (CFR) intervention, developed using Microsoft PowerPoint software, on students' ability to read health-related words within 3 seconds. The students were three adults with intellectual disabilities enrolled in a postsecondary college…

  8. NLS Handbook, 2005. National Longitudinal Surveys

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics, 2006

    2006-01-01

    The National Longitudinal Surveys (NLS), sponsored by the U.S. Bureau of Labor Statistics (BLS), are a set of surveys designed to gather information at multiple points in time on the labor market experiences of groups of men and women. Each of the cohorts has been selected to represent all people living in the United States at the initial…

  9. Video-Based Intervention in Teaching Fraction Problem-Solving to Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Hornberger, Erin

    2015-01-01

    The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with…

  10. System Design and Cataloging Meet the User: User Interfaces to Online Public Access Catalogs.

    ERIC Educational Resources Information Center

    Yee, Martha M.

    1991-01-01

    Discusses features of online public access catalogs: (1) demonstration of relationships between records; (2) provision of entry vocabularies; (3) arrangement of multiple entries on the screen; (4) provision of access points; (5) display of single records; and (6) division of catalogs into separate files or indexes. User studies and other research…

  11. A Cognitively-Based Communication Curriculum for Persons with Multiple Handicaps Functioning between 0-24 Months Developmentally.

    ERIC Educational Resources Information Center

    McMullen, Victoria B.

    This curriculum provides a sequence of activities designed to help develop cognitive and communication skills in severely and profoundly multi-handicapped individuals who are functioning between 0 and 24 months. Based on the principles that communication begins at birth and that educational programming must begin at the point where the handicapped…

  12. Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype

    NASA Astrophysics Data System (ADS)

    Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille

    2012-06-01

The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electron-multiplication technologies such as the emCCD (electron-multiplying CCD) and the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking designed and implemented on a rugged workstation is described. The results and performance of the system for identification and tracking are presented and discussed.

  13. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

In this paper, the problem of object caging and transportation is considered for multiple mobile robots. With the aim of minimizing the number of robots and reducing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object, so that the object cannot escape. In order to avoid collisions among robots, the robot radius parameter is used in designing the caging formation, and the A* algorithm is used so that the mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a rigid body and the artificial potential field method is applied. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position while avoiding obstacles.
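
    The geometric caging condition described above can be checked directly: every gap between adjacent robots on the ring must stay below the minimal width of the convex object. The sketch below implements only that check, taking the minimal width as the smallest over edges of the farthest-vertex distance to the edge line (a standard construction for convex polygons); the A*, potential-field and fuzzy sliding-mode parts are not reproduced, and the robot-radius handling is an illustrative refinement rather than the paper's exact condition.

    ```python
    # Geometric sketch of the caging condition only (no A*, no controller):
    # every gap between adjacent robots on the caging ring must be smaller than
    # the minimal width of the convex polygonal object.
    import numpy as np

    def minimal_width(poly):
        """Minimal width of a convex polygon given as an (N, 2) vertex array."""
        widths = []
        for i in range(len(poly)):
            p, q = poly[i], poly[(i + 1) % len(poly)]
            edge = q - p
            normal = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)
            widths.append(np.max(np.abs((poly - p) @ normal)))
        return min(widths)

    def is_caged(robots, poly, robot_radius=0.0):
        """robots: (M, 2) positions ordered around the object."""
        w = minimal_width(poly)
        for i in range(len(robots)):
            gap = np.linalg.norm(robots[i] - robots[(i + 1) % len(robots)])
            if gap - 2.0 * robot_radius >= w:   # object could slip through this gap
                return False
        return True

    square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
    ring = np.array([[1, -0.5], [2.5, 1], [1, 2.5], [-0.5, 1]], dtype=float)
    print("caged:", is_caged(ring, square, robot_radius=0.2))
    ```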

  14. Recent enhancements to the GRIDGEN structured grid generation system

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Chawner, John R.

    1992-01-01

    Significant enhancements are being implemented into the GRIDGEN3D, multiple block, structured grid generation software. Automatic, point-to-point, interblock connectivity will be possible through the addition of the domain entity to GRIDBLOCK's block construction process. Also, the unification of GRIDGEN2D and GRIDBLOCK has begun with the addition of edge grid point distribution capability to GRIDBLOCK. The geometric accuracy of surface grids and the ease with which databases may be obtained is being improved by adding support for standard computer-aided design formats (e.g., PATRAN Neutral and IGES files). Finally, volume grid quality was improved through addition of new SOR algorithm features and the new hybrid control function type to GRIDGEN3D.

  15. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameters of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. In this way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of the fisheye lenses of the cameras has been carried out. Reference measurements were also made using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  16. Creative use of pilot points to address site and regional scale heterogeneity in a variable-density model

    USGS Publications Warehouse

    Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.

    2010-01-01

Pilot points for parameter estimation were creatively used to address heterogeneity at both the well field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity farther from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
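
    A conceptual sketch of the dual inner-outer parameterization is given below: conductivity at a cell is the regional field interpolated from the sparse outer pilot points, multiplied by 10 raised to a log-multiplier interpolated from the dense inner points. Inverse-distance weighting stands in for the kriging/PEST machinery actually used, and all coordinates and values are invented.

    ```python
    # Conceptual sketch of the dual-scale parameterization: conductivity at a
    # model cell = field interpolated from outer pilot points, scaled by
    # 10**(log-multiplier interpolated from the inner pilot points).
    import numpy as np

    def idw(xy, pts, vals, power=2.0, eps=1e-12):
        d = np.linalg.norm(pts - xy, axis=1)
        w = 1.0 / (d ** power + eps)
        return np.sum(w * vals) / np.sum(w)

    def conductivity(xy, outer_pts, outer_logK, inner_pts, inner_logmult):
        logK_regional = idw(xy, outer_pts, outer_logK)     # regional-scale field
        logmult_site = idw(xy, inner_pts, inner_logmult)   # site-scale adjustment
        return 10.0 ** (logK_regional + logmult_site)

    outer_pts = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0], [5000.0, 5000.0]])
    outer_logK = np.array([2.0, 2.3, 1.8, 2.1])            # log10 K (illustrative)
    inner_pts = np.array([[2400.0, 2500.0], [2600.0, 2500.0]])
    inner_logmult = np.array([0.4, -0.2])                  # local log multipliers

    print(conductivity(np.array([2500.0, 2500.0]),
                       outer_pts, outer_logK, inner_pts, inner_logmult))
    ```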

  17. Guidance, Navigation, and Control Technology Assessment for Future Planetary Science Missions

    NASA Technical Reports Server (NTRS)

    Beauchamp, Pat; Cutts, James; Quadrelli, Marco B.; Wood, Lincoln J.; Riedel, Joseph E.; McHenry, Mike; Aung, MiMi; Cangahuala, Laureano A.; Volpe, Rich

    2013-01-01

    Future planetary explorations envisioned by the National Research Council's (NRC's) report titled Vision and Voyages for Planetary Science in the Decade 2013-2022, developed for NASA Science Mission Directorate (SMD) Planetary Science Division (PSD), seek to reach targets of broad scientific interest across the solar system. This goal requires new capabilities such as innovative interplanetary trajectories, precision landing, operation in close proximity to targets, precision pointing, multiple collaborating spacecraft, multiple target tours, and advanced robotic surface exploration. Advancements in Guidance, Navigation, and Control (GN&C) and Mission Design in the areas of software, algorithm development and sensors will be necessary to accomplish these future missions. This paper summarizes the key GN&C and mission design capabilities and technologies needed for future missions pursuing SMD PSD's scientific goals.

  18. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
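
    As a sketch of the control-allocation idea (bounded least squares solved by a simple fixed-point/projected-gradient iteration), the code below distributes a commanded moment vector over rate- and position-limited effectors. It is not the authors' X-33 algorithm; the effectiveness matrix B, the command d and the limits are made up.

    ```python
    # Control allocation as bounded least squares: find surface deflections u
    # (within limits) so that B @ u best matches the commanded moments d.
    # A projected-gradient fixed-point iteration stands in for the paper's
    # globally convergent algorithm.
    import numpy as np

    def allocate(B, d, lo, hi, iters=500):
        # Step size below 2/||B^T B|| guarantees convergence of the iteration.
        gamma = 1.0 / np.linalg.norm(B.T @ B, 2)
        u = np.zeros(B.shape[1])
        for _ in range(iters):
            u = np.clip(u - gamma * B.T @ (B @ u - d), lo, hi)
        return u

    B = np.array([[1.0, 0.8, -0.5, 0.0],     # rows: roll/pitch/yaw effectiveness
                  [0.2, -1.0, 0.3, 0.9],
                  [0.0, 0.1, 1.0, -0.7]])
    d = np.array([0.4, -0.2, 0.6])           # commanded moments
    lo, hi = -np.ones(4) * 0.5, np.ones(4) * 0.5

    u = allocate(B, d, lo, hi)
    print("deflections:", np.round(u, 3), " residual:", np.round(B @ u - d, 3))
    ```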

  19. Reconfigurable Flight Control Designs With Application to the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zhenglu

    1999-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.

  20. A Web Based Collaborative Design Environment for Spacecraft

    NASA Technical Reports Server (NTRS)

    Dunphy, Julia

    1998-01-01

    In this era of shrinking federal budgets in the USA we need to dramatically improve our efficiency in the spacecraft engineering design process. We have come up with a method which captures much of the experts' expertise in a dataflow design graph: Seamlessly connectable set of local and remote design tools; Seamlessly connectable web based design tools; and Web browser interface to the developing spacecraft design. We have recently completed our first web browser interface and demonstrated its utility in the design of an aeroshell using design tools located at web sites at three NASA facilities. Multiple design engineers and managers are now able to interrogate the design engine simultaneously and find out what the design looks like at any point in the design cycle, what its parameters are, and how it reacts to adverse space environments.

  1. Development of an algorithm to provide awareness in choosing study designs for inclusion in systematic reviews of healthcare interventions: a method study

    PubMed Central

    Peinemann, Frank; Kleijnen, Jos

    2015-01-01

Objectives: To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design: Method study: (1) To summarise the literature base on the topic. (2) To apply the integration of various study types in systematic reviews. (3) To devise decision points and outline a pragmatic decision tree. (4) To check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results: (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and type of outcome (death, disease, discomfort, disability, dissatisfaction) as decision points, and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns, and the problem of unavailable high-quality evidence. When applying the algorithm, disease-specific circumstances and the aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions: We propose that the algorithm can help bring seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness, and we think that it may reduce the time burden on review authors and may contribute to the production of a higher-quality review. PMID:26289450

  2. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    PubMed

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  3. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation on the Pareto-optimal solutions of this optimization shows that the stall margin has been increased with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents improvements of +0.41, +0.56 and +0.9 points in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both the single- and multi-point optimizations.
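
    Independently of the CFD and genetic-algorithm machinery, the Pareto-optimal set referred to above is simply the non-dominated subset of the evaluated designs. The short filter below shows that post-processing step on made-up efficiency values (both objectives maximized); it does not reproduce the optimization loop itself.

    ```python
    # Post-processing sketch only: given candidate blade designs already
    # evaluated for two objectives to be maximized (design-point efficiency,
    # near-stall efficiency), keep the non-dominated (Pareto-optimal) ones.
    import numpy as np

    def pareto_front(F):
        """F: (n, 2) objective values, larger is better. Returns boolean mask."""
        keep = np.ones(len(F), dtype=bool)
        for i, fi in enumerate(F):
            if keep[i]:
                dominated = np.all(F <= fi, axis=1) & np.any(F < fi, axis=1)
                keep &= ~dominated
        return keep

    rng = np.random.default_rng(1)
    F = rng.uniform([0.88, 0.82], [0.93, 0.88], size=(200, 2))  # toy efficiencies
    mask = pareto_front(F)
    print(f"{mask.sum()} Pareto-optimal designs out of {len(F)}")
    ```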

  4. Mixture-based gatekeeping procedures in adaptive clinical trials.

    PubMed

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.

  5. Decoherence-Free Interaction between Giant Atoms in Waveguide Quantum Electrodynamics

    NASA Astrophysics Data System (ADS)

    Kockum, Anton Frisk; Johansson, Göran; Nori, Franco

    2018-04-01

In quantum-optics experiments with both natural and artificial atoms, the atoms are usually small enough that they can be approximated as pointlike compared to the wavelength of the electromagnetic radiation with which they interact. However, superconducting qubits coupled to a meandering transmission line, or to surface acoustic waves, can realize "giant artificial atoms" that couple to a bosonic field at several points which are wavelengths apart. Here, we study setups with multiple giant atoms coupled at multiple points to a one-dimensional (1D) waveguide. We show that the giant atoms can be protected from decohering through the waveguide, but still have exchange interactions mediated by the waveguide. Unlike in decoherence-free subspaces, here the entire multiatom Hilbert space (2^N states for N atoms) is protected from decoherence. This is not possible with "small" atoms. We further show how this decoherence-free interaction can be designed in setups with multiple atoms to implement, e.g., a 1D chain of atoms with nearest-neighbor couplings or a collection of atoms with all-to-all connectivity. This may have important applications in quantum simulation and quantum computing.

  6. Decoherence-Free Interaction between Giant Atoms in Waveguide Quantum Electrodynamics.

    PubMed

    Kockum, Anton Frisk; Johansson, Göran; Nori, Franco

    2018-04-06

    In quantum-optics experiments with both natural and artificial atoms, the atoms are usually small enough that they can be approximated as pointlike compared to the wavelength of the electromagnetic radiation with which they interact. However, superconducting qubits coupled to a meandering transmission line, or to surface acoustic waves, can realize "giant artificial atoms" that couple to a bosonic field at several points which are wavelengths apart. Here, we study setups with multiple giant atoms coupled at multiple points to a one-dimensional (1D) waveguide. We show that the giant atoms can be protected from decohering through the waveguide, but still have exchange interactions mediated by the waveguide. Unlike in decoherence-free subspaces, here the entire multiatom Hilbert space (2^{N} states for N atoms) is protected from decoherence. This is not possible with "small" atoms. We further show how this decoherence-free interaction can be designed in setups with multiple atoms to implement, e.g., a 1D chain of atoms with nearest-neighbor couplings or a collection of atoms with all-to-all connectivity. This may have important applications in quantum simulation and quantum computing.

  7. Transformations in a Civil Discourse Public Speaking Class: Speakers' and Listeners' Attitude Change. Scholarship of Teaching and Learning

    ERIC Educational Resources Information Center

    Gayle, Barbara Mae

    2004-01-01

    Learning to engage in civil discourse requires students to maintain an openness to new points of view and attitude change. In a public speaking course based on principles of civil discourse, classroom procedures were designed to foster subjective reframing by engaging students in the disorienting exercise of supporting multiple perspectives on the…

  8. Computer program determines vibration in three-dimensional space of hydraulic lines excited by forced displacements

    NASA Technical Reports Server (NTRS)

    Dodge, W. G.

    1968-01-01

    Computer program determines the forced vibration in three dimensional space of a multiple degree of freedom beam type structural system. Provision is made for the longitudinal axis of the analytical model to change orientation at any point along its length. This program is used by industries in which structural design dynamic analyses are performed.

  9. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    PubMed

    Beauchamp, Guy

    2008-10-23

This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R² = 0.995, standard error of boiling point 4.2 degrees C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 degrees C.
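
    The structure of such a model is easy to reproduce: boiling point regressed on molar refraction plus indicator variables for the substitution subgroup. The sketch below uses invented placeholder values (not the 86-compound data set) purely to show the model form.

    ```python
    # Structure-only sketch of the regression model: boiling point regressed on
    # molar refraction plus dummy variables for the hydrogen/halogen subgroup.
    # The numbers below are invented placeholders.
    import numpy as np

    molar_refraction = np.array([11.2, 15.8, 20.6, 14.9, 19.3, 25.1])
    subgroup = np.array([0, 0, 1, 1, 2, 2])        # 3 of the 9 subgroups, coded 0..2
    bp = np.array([-24.0, 3.5, 47.0, 31.0, 61.0, 92.0])   # boiling points, deg C

    dummies = np.eye(subgroup.max() + 1)[subgroup]  # one-hot subgroup indicators
    X = np.column_stack([molar_refraction, dummies])  # subgroup dummies act as intercepts

    coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((bp - pred) ** 2) / np.sum((bp - bp.mean()) ** 2)
    print("slope on molar refraction:", round(coef[0], 2), " R^2:", round(r2, 3))
    ```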

  10. P-value interpretation and alpha allocation in clinical trials.

    PubMed

    Moyé, L A

    1998-08-01

    Although much value has been placed on type I error event probabilities in clinical trials, interpretive difficulties often arise that are directly related to clinical trial complexity. Deviations of the trial execution from its protocol, the presence of multiple treatment arms, and the inclusion of multiple end points complicate the interpretation of an experiment's reported alpha level. The purpose of this manuscript is to formulate the discussion of P values (and power for studies showing no significant differences) on the basis of the event whose relative frequency they represent. Experimental discordance (discrepancies between the protocol's directives and the experiment's execution) is linked to difficulty in alpha and beta interpretation. Mild experimental discordance leads to an acceptable adjustment for alpha or beta, while severe discordance results in their corruption. Finally, guidelines are provided for allocating type I error among a collection of end points in a prospectively designed, randomized controlled clinical trial. When considering secondary end point inclusion in clinical trials, investigators should increase the sample size to preserve the type I error rates at acceptable levels.
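
    One common way to implement prospective alpha allocation across end points is a Bonferroni-style split, sketched below; the shares are illustrative, and the article's own guidelines may allocate the type I error differently.

    ```python
    # Bonferroni-style alpha allocation sketch: reserve most of the overall
    # alpha for the primary end point and spread the remainder over the
    # prospectively declared secondary end points.
    def allocate_alpha(total_alpha=0.05, primary_share=0.8, n_secondary=3):
        primary = total_alpha * primary_share
        secondary = (total_alpha - primary) / n_secondary
        return primary, [secondary] * n_secondary

    primary, secondary = allocate_alpha()
    print(f"primary end point tested at alpha = {primary:.3f}")
    print("each secondary end point tested at alpha =",
          [round(a, 4) for a in secondary])
    ```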

  11. The Use of Mixed Methods for Therapeutic Massage Research

    PubMed Central

    Porcino, Antony Joseph; Verhoef, Marja J.

    2010-01-01

    Mixed methods research is the integration of quantitative and qualitative components in a research project. Whether you are reading or designing a mixed methods research project, it is important to be familiar with both qualitative and quantitative research methods and the specific purposes for which they are brought together in a study: triangulation, complementarity, expansion, initiation, or development. In addition, decisions need to be made about the sequencing and the priority or importance of each qualitative and quantitative component relative to the other components, and the point or points at which the various qualitative and quantitative components will be integrated. Mixed methods research is increasingly being recognized for its ability to bring multiple points of view to a research project, taking advantage of the strengths of each of the quantitative and qualitative components to explain or resolve complex phenomena or results. This ability becomes critical when complex healing systems such as therapeutic massage are being studied. Complex healing systems may have multiple physiologic effects, often reflected in changes throughout the patient’s body. Additionally, the patient’s experience of the treatment may be an important outcome. PMID:21589698

  12. Robust control of a parallel hybrid drivetrain with a CVT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, T.; Schroeder, D.

    1996-09-01

    In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input-multiple-output system (MIMO-System). Input-Output-Linearization offers the possibility of linearizing and of decoupling the system. Since for example the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual working point, an exact linearization of the plant will mostly fail. Therefore a robust control algorithm based on sliding mode is used to control the drivetrain.

  13. Servicing and Deployment of National Resources in Sun-Earth Libration Point Orbits

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Beckman, Mark; Mar, Greg C.; Mesarch, Michael; Cooley, Steven; Leete, Steven J.

    2002-01-01

    Spacecraft travel between the Sun-Earth system, the Earth-Moon system, and beyond has received extensive attention recently. The existence of a connection between unstable regions enables mission designers to envision scenarios of multiple spacecraft traveling cheaply from system to system, rendezvousing, servicing, and refueling along the way. This paper presents examples of transfers between the Sun-Earth and Earth-Moon systems using a true ephemeris and perturbation model. It shows the (Delta)V costs associated with these transfers, including the costs to reach the staging region from the Earth. It explores both impulsive and low thrust transfer trajectories. Additionally, analysis that looks specifically at the use of nuclear power in libration point orbits and the issues associated with them such as inadvertent Earth return is addressed. Statistical analysis of Earth returns and the design of biased orbits to prevent any possible return are discussed. Lastly, the idea of rendezvous between spacecraft in libration point orbits using impulsive maneuvers is addressed.

  14. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
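
    The error-masking step at a single cross-check point reduces to majority voting over the versions' results, as in the toy sketch below; DEDIX itself additionally handles distributed execution, synchronization and inexact voting, none of which is modeled here.

    ```python
    # Toy illustration of error masking at one cross-check point: run the
    # independently written versions on the same input and take a majority
    # vote, so a single faulty version does not corrupt the system output.
    from collections import Counter

    def version_a(x): return x * x
    def version_b(x): return x ** 2
    def version_c(x): return x * x + 1     # seeded fault for demonstration

    def vote(versions, x):
        results = [v(x) for v in versions]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority at cross-check point")
        return value, results

    value, results = vote([version_a, version_b, version_c], 7)
    print("individual results:", results, "-> voted output:", value)
    ```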

  15. Overview: Solar Electric Propulsion Concept Designs for SEP Technology Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Mcguire, Melissa L.; Hack, Kurt J.; Manzella, David; Herman, Daniel

    2014-01-01

JPC presentation of the concept designs for the NASA Solar Electric Propulsion Technology Demonstration Mission paper. Multiple Solar Electric Propulsion Technology Demonstration Mission concepts were developed to assess vehicle performance and estimated mission cost. Concepts ranged from a 10,000 kg spacecraft capable of delivering 4000 kg of payload to one of the Earth-Moon Lagrange points in support of future human-crewed outposts, to a 180 kg spacecraft capable of performing an asteroid rendezvous mission after being launched to a geostationary transfer orbit as a secondary payload.

  16. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design Using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Vavrina, Matthew

    2015-01-01

The customer (scientist or project manager) most often does not want just one point solution to the mission design problem. Instead, an exploration of a multi-objective trade space is required. For a typical main-belt asteroid mission, the customer might wish to see the trade space of launch date vs. flight time vs. deliverable mass, while varying the destination asteroid, planetary flybys, launch year, etcetera. To address this question we use a multi-objective discrete outer loop which defines many single-objective real-valued inner-loop problems.

  17. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
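
    A software stand-in for the processing chain described above (prototype filter split into polyphase branches, branch accumulation, then an FFT across the branches) is sketched below as a 32-channel NumPy channelizer; the fixed-point word lengths and the run-length coefficient compression of the real hardware are not modeled, and the channel count is scaled down for illustration.

    ```python
    # Minimal polyphase-FFT channelizer sketch: a windowed-sinc prototype filter
    # is split into M polyphase branches, each block of M samples is accumulated
    # through the branch taps, and an M-point FFT gives the channelized spectrum.
    import numpy as np

    def polyphase_fft(x, n_channels=32, taps_per_branch=8):
        M, T = n_channels, taps_per_branch
        proto = np.sinc((np.arange(M * T) - M * T / 2 + 0.5) / M) * np.hanning(M * T)
        coeffs = proto.reshape(T, M)             # polyphase decomposition
        n_blocks = len(x) // M - T + 1
        spectra = np.empty((n_blocks, M), dtype=complex)
        for k in range(n_blocks):
            window = x[k * M:(k + T) * M].reshape(T, M)
            spectra[k] = np.fft.fft(np.sum(window * coeffs, axis=0))
        return spectra

    n = 1 << 14
    t = np.arange(n)
    x = np.cos(2 * np.pi * 0.17 * t) + 0.01 * np.random.randn(n)
    S = polyphase_fft(x)
    print("strongest channel:", np.argmax(np.mean(np.abs(S) ** 2, axis=0)))
    ```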

  18. LSST (Hoop/Column) Maypole Antenna Development Program, phase 1, part 1

    NASA Technical Reports Server (NTRS)

    Sullivan, M. R.

    1982-01-01

    The first of a two-phase program was performed to develop the technology necessary to evaluate, design, manufacture, package, transport and deploy the hoop/column deployable antenna reflector by means of a ground based program. The hoop/column concept consists of a cable stiffened large diameter hoop and central column structure that supports and contours a radio frequency reflective mesh surface. Mission scenarios for communications, radiometer and radio astronomy, were studied. The data to establish technology drivers that resulted in a specification of a point design was provided. The point design is a multiple beam quadaperture offset antenna system wich provides four separate offset areas of illumination on a 100 meter diameter symmetrical parent reflector. The periphery of the reflector is a hoop having 48 segments that articulate into a small stowed volume around a center extendable column. The hoop and column are structurally connected by graphite and quartz cables. The prominence of cables in the design resulted in the development of advanced cable technology. Design verification models were built of the hoop, column, and surface stowage subassemblies. Model designs were generated for a half scale sector of the surface and a 1/6 scale of the complete deployable reflector.

  19. Work/control stations in Space Station weightlessness

    NASA Technical Reports Server (NTRS)

    Willits, Charles

    1990-01-01

    An ergonomic integration of controls, displays, and associated interfaces with an operator, whose body geometry and dynamics may be altered by the state of weightlessness, is noted to rank in importance with the optimal positioning of controls relative to the layout and architecture of 'body-ported' work/control stations applicable to the NASA Space Station Freedom. A long-term solution to this complex design problem is envisioned to encompass the following features: multiple imaging, virtual optics, screen displays controlled by a keyboard ergonomically designed for weightlessness, cursor control, a CCTV camera, and a hand-controller featuring 'no-grip' vernier/tactile positioning. This controller frees all fingers for multiple-switch actuations, while retaining index/register determination with the hand controller. A single architectural point attachment/restraint may be used which requires no residual muscle tension in either brief or prolonged operation.

  20. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
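
    The sigma-point step on its own is compact: a small set of deterministic sample points captures the parameter mean and covariance and is pushed through the nonlinear response, as in the toy two-parameter sketch below. The model-overlap criterion and the stimulus optimization of the full method are not reproduced, and the response function and numbers are invented.

    ```python
    # Sketch of the sigma-point idea only: propagate the parameter mean and
    # covariance of a toy two-parameter model through its nonlinear response
    # and recover the response mean/variance, instead of linearizing.
    import numpy as np

    def sigma_points(mean, cov, kappa=1.0):
        n = len(mean)
        S = np.linalg.cholesky((n + kappa) * cov)
        pts = [mean] + [mean + S[:, i] for i in range(n)] \
                     + [mean - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    def response(theta, t=10.0):
        """Toy saturating response y(t) = Vmax*(1 - exp(-k t)); theta = [Vmax, k]."""
        return theta[0] * (1.0 - np.exp(-theta[1] * t))

    mean = np.array([2.0, 0.3])
    cov = np.diag([0.2 ** 2, 0.05 ** 2])
    pts, w = sigma_points(mean, cov)

    y = np.array([response(p) for p in pts])
    y_mean = np.sum(w * y)
    y_var = np.sum(w * (y - y_mean) ** 2)
    print(f"response mean ~ {y_mean:.3f}, std ~ {np.sqrt(y_var):.3f}")
    ```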

  1. Evidence-based point-of-care tests and device designs for disaster preparedness.

    PubMed

    Brock, T Keith; Mecozzi, Daniel M; Sumner, Stephanie; Kost, Gerald J

    2010-01-01

The objective was to define pathogen tests and device specifications needed for emerging point-of-care (POC) technologies used in disasters. Surveys included multiple-choice and ranking questions. Multiple-choice questions were analyzed with the chi-square test for goodness of fit and the binomial distribution test. Rankings were scored and compared using analysis of variance and Tukey's multiple comparison test. Participants were disaster care experts on the editorial boards of the American Journal of Disaster Medicine and Disaster Medicine and Public Health Preparedness, and readers of the POC Journal. Vibrio cholerae and Staphylococcus aureus were the top-ranked pathogens for testing in disaster settings. Respondents felt that disaster response teams should be equipped with pandemic infectious disease tests for novel 2009 H1N1 and avian H5N1 influenza (disaster care, p < 0.05; POC, p < 0.01). In disaster settings, respondents preferred self-contained test cassettes (disaster care, p < 0.05; POC, p < 0.001) for direct blood sampling (POC, p < 0.01) and disposal of biological waste (disaster care, p < 0.05; POC, p < 0.001). Multiplex testing performed at the POC was preferred in urgent care and emergency room settings. Evidence-based needs assessment identifies pathogen detection priorities in disaster care scenarios, in which Vibrio cholerae, methicillin-sensitive and methicillin-resistant Staphylococcus aureus, and Escherichia coli ranked the highest. POC testing should incorporate setting-specific design criteria such as safe disposable cassettes and direct blood sampling at the site of care.

  2. The Importance and Role of Intracluster Correlations in Planning Cluster Trials

    PubMed Central

    Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark

    2008-01-01

    There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
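
    For orientation, the familiar single-ICC design effect for a posttest-only comparison is sketched below; the article's point is precisely that the nested cross-sectional time × condition analysis additionally needs the over-time intracluster correlation, whose formulae are not reproduced here. All inputs are illustrative.

    ```python
    # Back-of-envelope sketch of the single-ICC design effect for a posttest-only
    # comparison of two conditions (the nested cross-sectional formulae with a
    # second, over-time ICC are given in the paper, not here).
    import math
    from scipy.stats import norm

    def communities_per_arm(delta, sd, m, icc, alpha=0.05, power=0.8):
        """delta: detectable mean difference; sd: outcome SD;
        m: respondents sampled per community; icc: within-community correlation."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_independent = 2 * (z * sd / delta) ** 2   # per arm, simple random sample
        deff = 1 + (m - 1) * icc                    # variance inflation factor
        return math.ceil(n_independent * deff / m)

    print("communities per condition:",
          communities_per_arm(delta=10, sd=30, m=100, icc=0.02))
    ```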

  3. How Much Is Too Little to Detect Impacts? A Case Study of a Nuclear Power Plant

    PubMed Central

    Széchy, Maria T. M.; Viana, Mariana S.; Curbelo-Fernandez, Maria P.; Lavrado, Helena P.; Junqueira, Andrea O. R.; Vilanova, Eduardo; Silva, Sérgio H. G.

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, of a nuclear power plant effluent on sessile epibiota assemblages, and its extent, using a suitable and feasible sampling design with no ‘before’ data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of the discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of the discharge). However, depending on the time of the year, the impact reached up to 600 m. We found a significantly lower richness of taxa in the Effluent site when compared to other sites. Furthermore, at all times, the variability of assemblages near the discharge was also smaller than in other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact, depending on the time of sampling. Our findings greatly contribute to increasing knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general. PMID:23110117

  4. How much is too little to detect impacts? A case study of a nuclear power plant.

    PubMed

    Mayer-Pinto, Mariana; Ignacio, Barbara L; Széchy, Maria T M; Viana, Mariana S; Curbelo-Fernandez, Maria P; Lavrado, Helena P; Junqueira, Andrea O R; Vilanova, Eduardo; Silva, Sérgio H G

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, of a nuclear power plant effluent on sessile epibiota assemblages, and its extent, using a suitable and feasible sampling design with no 'before' data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of the discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of the discharge). However, depending on the time of the year, the impact reached up to 600 m. We found a significantly lower richness of taxa in the Effluent site when compared to other sites. Furthermore, at all times, the variability of assemblages near the discharge was also smaller than in other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact, depending on the time of sampling. Our findings greatly contribute to increasing knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general.

  5. Avelumab (anti-PD-L1) in platinum-resistant/refractory ovarian cancer: JAVELIN Ovarian 200 Phase III study design.

    PubMed

    Pujade-Lauraine, Eric; Fujiwara, Keiichi; Dychter, Samuel S; Devgan, Geeta; Monk, Bradley J

    2018-03-27

    Avelumab is a human anti-PD-L1 checkpoint inhibitor with clinical activity in multiple solid tumors. Here, we describe the rationale and design for JAVELIN Ovarian 200 (NCT02580058), the first randomized Phase III trial to evaluate the role of checkpoint inhibition in women with ovarian cancer. This three-arm trial is comparing avelumab administered alone or in combination with pegylated liposomal doxorubicin versus pegylated liposomal doxorubicin alone in patients with platinum-resistant/refractory recurrent ovarian, fallopian tube or peritoneal cancer. Eligible patients are not preselected based on PD-L1 expression and may have received up to three prior lines of chemotherapy for platinum-sensitive disease, but none for resistant disease. Overall survival and progression-free survival are primary end points, and secondary end points include biomarker evaluations and pharmacokinetics.

  6. Common pitfalls in statistical analysis: The perils of multiple testing

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
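    As a brief, generic illustration (not taken from the article), the snippet below adjusts one invented set of p-values with three standard corrections via statsmodels.

      # Bonferroni, Holm and Benjamini-Hochberg adjustment of a set of p-values.
      import numpy as np
      from statsmodels.stats.multitest import multipletests

      pvals = np.array([0.001, 0.008, 0.012, 0.045, 0.20, 0.64])  # e.g. six end-points
      for method in ("bonferroni", "holm", "fdr_bh"):
          reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
          print(method, reject, np.round(p_adj, 3))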

  7. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieved source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
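    A hedged sketch of the linear core of such a retrieval: once candidate release locations are fixed, unit-strength plume contributions enter the measurements linearly, so the strengths can be recovered with non-negative least squares. The toy ground-level plume formula and dispersion coefficients below are assumptions made for illustration, not the paper's dispersion model, and the location search is omitted.

      # Recover release strengths at assumed source locations by non-negative least squares.
      import numpy as np
      from scipy.optimize import nnls

      def unit_plume(receptor, source, u=3.0):
          """Ground-level concentration per unit emission rate (toy Gaussian plume, wind along +x)."""
          dx, dy = receptor[0] - source[0], receptor[1] - source[1]
          if dx <= 0:
              return 0.0                        # receptor upwind of the source
          sig_y, sig_z = 0.08 * dx, 0.06 * dx   # crude dispersion growth with downwind distance
          return np.exp(-dy**2 / (2 * sig_y**2)) / (np.pi * u * sig_y * sig_z)

      sources = np.array([[0.0, -20.0], [0.0, 35.0]])            # two candidate locations (m)
      receptors = np.array([[150.0, y] for y in range(-60, 61, 20)])

      A = np.array([[unit_plume(r, s) for s in sources] for r in receptors])
      true_q = np.array([2.0, 0.5])                              # g/s, used to synthesize data
      rng = np.random.default_rng(1)
      measured = A @ true_q + rng.normal(0.0, 1e-6, len(receptors))
      q_hat, _ = nnls(A, measured)
      print(np.round(q_hat, 3))                                  # approximately [2.0, 0.5]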

  8. Experiment Design for Nonparametric Models Based On Minimizing Bayes Risk: Application to Voriconazole1

    PubMed Central

    Bayard, David S.; Neely, Michael

    2016-01-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a nonparametric model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher Information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the nonparametric model. Specifically, the problem of identifying an individual from a nonparametric prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient’s behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (Multiple-Model Optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications. PMID:27909942
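    As a hypothetical sketch of classification-oriented design for a discrete prior (not the MMopt implementation), each candidate pair of sampling times below is scored by a pairwise Bhattacharyya-type upper bound on the risk of confusing two support points under additive Gaussian noise, and the design with the smallest bound is kept; the one-compartment model, support points and noise level are invented.

      # Choose sampling times that best separate the support points of a discrete prior.
      import itertools
      import numpy as np

      support = np.array([[0.10, 20.0], [0.25, 15.0], [0.40, 30.0]])  # (k_e, V) support points
      weights = np.array([0.5, 0.3, 0.2])
      dose, sigma = 100.0, 0.4                                        # dose (mg), noise SD

      def predict(times, k, V):
          return (dose / V) * np.exp(-k * np.asarray(times))

      def risk_bound(times):
          bound = 0.0
          for (i, (ki, Vi)), (j, (kj, Vj)) in itertools.combinations(enumerate(support), 2):
              d2 = np.sum((predict(times, ki, Vi) - predict(times, kj, Vj)) ** 2)
              bound += np.sqrt(weights[i] * weights[j]) * np.exp(-d2 / (8 * sigma**2))
          return bound

      candidates = list(itertools.combinations(np.arange(0.5, 12.5, 0.5), 2))  # two-sample designs
      best = min(candidates, key=risk_bound)
      print("best sampling times (h):", best)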

  9. Experiment design for nonparametric models based on minimizing Bayes Risk: application to voriconazole¹.

    PubMed

    Bayard, David S; Neely, Michael

    2017-04-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a NP model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the NP model. Specifically, the problem of identifying an individual from a NP prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient's behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (multiple-model optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications.

  10. FNAL central email systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Jack; Lilianstrom, Al; Pasetes, Ray

    2004-10-01

    The FNAL Email System is the primary point of entry for email destined for an employee or user at Fermilab. This centrally supported system is designed for reliability and availability. It uses multiple layers of protection to help ensure that: (1) SPAM messages are tagged properly; (2) All mail is inspected for viruses; and (3) Valid mail gets delivered. This system employs numerous redundant subsystems to accomplish these tasks.

  11. Programmable bio-nano-chip system for saliva diagnostics

    NASA Astrophysics Data System (ADS)

    Christodoulides, Nicolaos; De La Garza, Richard; Simmons, Glennon W.; McRae, Michael P.; Wong, Jorge; Kosten, Thomas R.; Miller, Craig S.; Ebersole, Jeffrey L.; McDevitt, John

    2014-06-01

    This manuscript describes the programmable Bio-Nano-Chip (p-BNC) approach, a miniaturized assay platform designed for the rapid detection and quantitation of multiple analytes in biological fluids, along with specific applications in salivary diagnostics intended for the point of need (PON). Included here are oral fluid-based tests for local periodontal disease and systemic cardiac disease, and multiplexed tests for drugs of abuse.

  12. Design of a Multi-Touch Tabletop for Simulation-Based Training

    DTIC Science & Technology

    2014-06-01

    receive, for example using point and click mouse-based computer interactions to specify the routes that vehicles take as part of a convoy...learning, coordination and support for planning. We first provide background in tabletop interaction in general and survey earlier efforts to use...tremendous progress over the past five years. Touch detection technologies now enable multiple users to interact simultaneously on large areas with

  13. Evaluation of designed ligands by a multiple screening method: Application to glycogen phosphorylase inhibitors constructed with a variety of approaches

    NASA Astrophysics Data System (ADS)

    So, Sung-Sau; Karplus, Martin

    2001-07-01

    Glycogen phosphorylase (GP) is an important enzyme that regulates blood glucose level and a key therapeutic target for the treatment of type II diabetes. In this study, a number of potential GP inhibitors are designed with a variety of computational approaches. They include the applications of MCSS, LUDI and CoMFA to identify additional fragments that can be attached to existing lead molecules; the use of 2D and 3D similarity-based QSAR models (HQSAR and SMGNN) and of the LUDI program to identify novel molecules that may bind to the glucose binding site. The designed ligands are evaluated by a multiple screening method, which is a combination of commercial and in-house ligand-receptor binding affinity prediction programs used in a previous study (So and Karplus, J. Comp.-Aid. Mol. Des., 13 (1999), 243-258). Each method is used at an appropriate point in the screening, as determined by both the accuracy of the calculations and the computational cost. A comparison of the strengths and weaknesses of the ligand design approaches is made.

  14. Computer-assisted 3D kinematic analysis of all leg joints in walking insects.

    PubMed

    Bender, John A; Simpson, Elaine M; Ritzmann, Roy E

    2010-10-26

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
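    A hedged sketch of the geometric core of such a tool: linear (DLT) triangulation of one marked joint from its pixel coordinates in two calibrated, synchronized views. The projection matrices below are simple placeholders, not the authors' calibration or software.

      # Direct linear transform (DLT) triangulation of one point from two views.
      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """Return the 3D point (in calibration units) from one two-view correspondence."""
          A = np.array([
              uv1[0] * P1[2] - P1[0],
              uv1[1] * P1[2] - P1[1],
              uv2[0] * P2[2] - P2[0],
              uv2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
      P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # camera 2, 10 cm baseline
      point = np.array([0.05, 0.02, 0.5, 1.0])                       # a joint 0.5 m in front
      uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
      uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
      print(np.round(triangulate(P1, P2, uv1, uv2), 4))              # recovers [0.05, 0.02, 0.5]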

  15. GLRS-R 2-colour retroreflector target design and predicted performance

    NASA Astrophysics Data System (ADS)

    Lund, Glenn

    The retroreflector ground target design for the GLRS-R spaceborne dual wavelength laser ranging system is described. The passive design flows down from the requirements of high station autonomy, high global field of view, little or no multiple pulse returns, and adequate optical cross section for most ranging geometries. The solution makes use of five hollow cube corner retroreflectors, of which one points to the zenith and the remaining four are inclined from the vertical at uniform azimuthal spacings. The large retroreflectors required are expected to generate narrow diffraction lobes. A good compromise solution is found by spoiling just one of the retroreflector dihedral angles from 90 deg, thus generating two symmetrically oriented diffraction lobes in the return beam. The required spoil angles are found to have little dependence on ground target latitude. Various link budget analyses are presented. They show the influence of such factors as point ahead optimization, turbulence, ranging angle, atmospheric visibility, and ground target thermal deformations.

  16. Point spread function engineering for iris recognition system design.

    PubMed

    Ashok, Amit; Neifeld, Mark A

    2010-04-01

    Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 × 8 reduces the iris-recognition performance by nearly a factor of 4 (on the CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves an FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields an FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.

  17. Accelerated longitudinal designs: An overview of modelling, power, costs and handling missing data.

    PubMed

    Galbraith, Sally; Bowden, Jack; Mander, Adrian

    2017-02-01

    Longitudinal studies are often used to investigate age-related developmental change. Whereas a single cohort design takes a group of individuals at the same initial age and follows them over time, an accelerated longitudinal design takes multiple single cohorts, each one starting at a different age. The main advantage of an accelerated longitudinal design is its ability to span the age range of interest in a shorter period of time than would be possible with a single cohort longitudinal design. This paper considers design issues for accelerated longitudinal studies. A linear mixed effect model is considered to describe the responses over age with random effects for intercept and slope parameters. Random and fixed cohort effects are used to cope with the potential bias accelerated longitudinal designs have due to multiple cohorts. The impact of other factors, such as costs and dropouts, on the power of testing or the precision of estimating parameters is examined. As duration-related costs increase relative to recruitment costs, the best designs shift towards shorter durations, with a cross-sectional design eventually becoming best. For designs with the same duration but differing intervals between measurements, we found a cut-off point in measurement costs relative to recruitment costs that determines the preferred frequency of measurements. Under our model of 30% dropout there was a maximum power loss of 7%.
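    As a small illustration (simulated data and an assumed model, not the paper's analysis), a linear mixed model with random intercepts and slopes over age plus a fixed cohort effect can be fitted with statsmodels:

      # Fit a random-intercept, random-slope model with a fixed cohort effect
      # to simulated accelerated-longitudinal data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      rows = []
      for cohort, start_age in enumerate([8, 10, 12]):          # three accelerated cohorts
          for subj in range(40):
              b0, b1 = rng.normal(0, 1.0), rng.normal(0, 0.1)   # subject-level deviations
              for wave in range(4):                             # four annual measurements
                  age = start_age + wave
                  y = 2.0 + 0.5 * age + 0.3 * cohort + b0 + b1 * age + rng.normal(0, 0.5)
                  rows.append(dict(subject=f"c{cohort}s{subj}", cohort=cohort, age=age, y=y))
      df = pd.DataFrame(rows)

      model = smf.mixedlm("y ~ age + C(cohort)", df, groups=df["subject"], re_formula="~age")
      print(model.fit().summary())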

  18. Point counts from clustered populations: Lessons from an experiment with Hawaiian crows

    USGS Publications Warehouse

    Hayward, G.D.; Kepler, C.B.; Scott, J.M.

    1991-01-01

    We designed an experiment to identify factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: average difference between number of Alala calling and number heard was 3.24 (± 0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.

  19. Development of a Multi-Point Microwave Interferometry (MPMI) Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Specht, Paul Elliott; Cooper, Marcia A.; Jilek, Brook Anton

    2015-09-01

    A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.

  20. Pointright: a system to redirect mouse and keyboard control among multiple machines

    DOEpatents

    Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.

  1. High data rate modem simulation for the space station multiple-access communications system

    NASA Technical Reports Server (NTRS)

    Horan, Stephen

    1987-01-01

    The communications system for the space station will require a space based multiple access component to provide communications between the space based program elements and the station. A study was undertaken to investigate two of the concerns of this multiple access system, namely, the issues related to the frequency spectrum utilization and the possibilities for higher order (than QPSK) modulation schemes for use in possible modulators and demodulators (modems). As a result of the investigation, many key questions about the frequency spectrum utilization were raised. At this point, frequency spectrum utilization is seen as an area requiring further work. Simulations were conducted using a computer aided communications system design package to provide a straw man modem structure to be used for both QPSK and 8-PSK channels.

  2. Design and analysis of a curved cylindrical Fresnel lens that produces high irradiance uniformity on the solar cell.

    PubMed

    González, Juan C

    2009-04-10

    A new type of convex Fresnel lens for linear photovoltaic concentration systems is presented. The lens designed with this method reaches 100% of geometrical optical efficiency, and the ratio (Aperture area)/(Receptor area) is up to 75% of the theoretical limit. The main goal of the design is high uniformity of the radiation on the cell surface for each input angle inside the acceptance. The ratio between the maximum and the minimum irradiance on points of the solar cell is less than 2. The lens has been designed with the simultaneous multiple surfaces (SMS) method of nonimaging optics, and ray tracing techniques have been used to characterize its performance for linear symmetry systems.

  3. Impact of Footprint Diameter and Off-Nadir Pointing on the Precision of Canopy Height Estimates from Spaceborne Lidar

    NASA Technical Reports Server (NTRS)

    Pang, Yong; Lefskky, Michael; Sun, Guoqing; Ranson, Jon

    2011-01-01

    A spaceborne lidar mission could serve multiple scientific purposes including remote sensing of ecosystem structure, carbon storage, terrestrial topography and ice sheet monitoring. The measurement requirements of these different goals will require compromises in sensor design. Footprint diameters that would be larger than optimal for vegetation studies have been proposed. Some spaceborne lidar mission designs include the possibility that a lidar sensor would share a platform with another sensor, which might require off-nadir pointing at angles of up to 16°. To resolve multiple mission goals and sensor requirements, detailed knowledge of the sensitivity of sensor performance to these aspects of mission design is required. This research used a radiative transfer model to investigate the sensitivity of forest height estimates to footprint diameter, off-nadir pointing and their interaction over a range of forest canopy properties. An individual-based forest model was used to simulate stands of mixed conifer forest in the Tahoe National Forest (Northern California, USA) and stands of deciduous forests in the Bartlett Experimental Forest (New Hampshire, USA). Waveforms were simulated for stands generated by a forest succession model using footprint diameters of 20 m to 70 m. Off-nadir angles of 0° to 16° were considered for a 25 m footprint diameter. Footprint diameters in the range of 25 m to 30 m were optimal for estimates of maximum forest height (R² of 0.95 and RMSE of 3 m). As expected, the contribution of vegetation height to the vertical extent of the waveform decreased with larger footprints, while the contribution of terrain slope increased. Precision of estimates decreased with an increasing off-nadir pointing angle, but off-nadir pointing had less impact on height estimates in deciduous forests than in coniferous forests. When pointing off-nadir, the decrease in precision was dependent on local incidence angle (the angle between the off-nadir beam and a line normal to the terrain surface), which is dependent on the off-nadir pointing angle, terrain slope, and the difference between the laser pointing azimuth and terrain aspect; the effect was larger when the sensor was aligned with the terrain azimuth but when aspect and azimuth are opposed, there was virtually no effect on R² or RMSE. A second effect of off-nadir pointing is that the laser beam will intersect individual crowns and the canopy as a whole from a different angle which had a distinct effect on the precision of lidar estimates of height, decreasing R² and increasing RMSE, although the effect was most pronounced for coniferous crowns.

  4. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
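    A hedged sketch of the selection step described above: tracked salient points are mapped through ground-plane homographies pre-computed for several candidate heights, and the height whose homography keeps the inter-point distances most nearly constant across frames is selected. The homographies and tracks below are placeholders, not the patented implementation.

      # Pick the candidate height whose homography minimizes the variation of
      # inter-point distances across video frames.
      import numpy as np

      def apply_h(H, pts):
          """Apply a 3x3 homography to an (N, 2) array of pixel points."""
          ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
          return ph[:, :2] / ph[:, 2:3]

      def distance_spread(H, tracks):
          """Variance over frames of all pairwise distances among mapped salient points."""
          dists = []
          for frame_pts in tracks:                          # tracks: list of (N, 2) arrays
              world = apply_h(H, frame_pts)
              diff = world[:, None, :] - world[None, :, :]
              pair = np.linalg.norm(diff, axis=-1)[np.triu_indices(len(world), 1)]
              dists.append(pair)
          return np.var(np.stack(dists), axis=0).sum()

      def estimate_height(homographies_by_height, tracks):
          """homographies_by_height: dict mapping height (m) to its 3x3 ground-plane homography."""
          return min(homographies_by_height,
                     key=lambda h: distance_spread(homographies_by_height[h], tracks))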

  5. Using multifield measurements to eliminate alignment degeneracies in the JWST testbed telescope

    NASA Astrophysics Data System (ADS)

    Sabatke, Erin; Acton, Scott; Schwenker, John; Towell, Tim; Carey, Larkin; Shields, Duncan; Contos, Adam; Leviton, Doug

    2007-09-01

    The primary mirror of the James Webb Space Telescope (JWST) consists of 18 segments and is 6.6 meters in diameter. A sequence of commissioning steps is carried out at a single field point to align the segments. At that single field point, though, the segmented primary mirror can compensate for aberrations caused by misalignments of the remaining mirrors. The misalignments can be detected in the wavefronts of off-axis field points. The Multifield (MF) step in the commissioning process surveys five field points and uses a simple matrix multiplication to calculate corrected positions for the secondary and primary mirrors. A demonstration of the Multifield process was carried out on the JWST Testbed Telescope (TBT). The results show that the Multifield algorithm is capable of reducing the field dependency of the TBT to about 20 nm RMS, relative to the TBT design nominal field dependency.

  6. A Methodology to Assess the Capability of Engine Designs to Meet Closed-Loop Performance and Operability Requirements

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Csank, Jeffrey

    2015-01-01

    Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.

  7. A Methodology to Assess the Capability of Engine Designs to Meet Closed-loop Performance and Operability Requirements

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Csank, Jeffrey T.

    2015-01-01

    Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95% response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76%. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76%. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.

  8. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
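    The sketch below is not the PEST/genetic-algorithm workflow itself, but a simplified first-order data-worth illustration in the same spirit: given a linearized model (a Jacobian of candidate observations with respect to parameters), a prior parameter covariance and an observation noise variance, it greedily picks the sensor locations that most reduce the variance of one scalar prediction. All matrices are random placeholders.

      # Greedy sensor selection by first-order (linearized) prediction-variance reduction.
      import numpy as np

      def prediction_variance(selected, J, P, g, r):
          if not selected:
              return g @ P @ g
          Js = J[selected]
          info = np.linalg.inv(P) + Js.T @ Js / r      # posterior information matrix
          return g @ np.linalg.solve(info, g)

      def greedy_design(J, P, g, r, n_sensors):
          chosen = []
          for _ in range(n_sensors):
              remaining = [i for i in range(len(J)) if i not in chosen]
              best = min(remaining,
                         key=lambda i: prediction_variance(chosen + [i], J, P, g, r))
              chosen.append(best)
          return chosen

      rng = np.random.default_rng(2)
      J = rng.normal(size=(30, 5))            # 30 candidate observation points, 5 parameters
      P = np.diag([1.0, 0.5, 0.5, 0.2, 0.2])  # prior parameter covariance
      g = rng.normal(size=5)                  # sensitivity of the predicted mean travel time
      print(greedy_design(J, P, g, r=0.1, n_sensors=4))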

  9. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The Hybrid vector/SIMD computational model of FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also initiated designing a software library for various computational kernels, each of which adapts FPVC's configuration and provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  10. Noise Identification in a Hot Transonic Jet Using Low-Dimensional Methods

    DTIC Science & Technology

    2008-03-01

    calibration between the nozzle static pressure (transducer) and total pressure (pitot probe) reveals a nearly linear relationship between the two, exhibiting... rakes of hot-wires. Multi-point correlations of velocity components coupled with assumptions of homogeneity and periodicity in the jet flow field...axisymmetric incompressible jet at one downstream position using an in-house designed rake of 138 hot-wires. The experiment was then carried out at multiple

  11. Comparison of Multiple Molecular Dynamics Trajectories Calculated for the Drug-Resistant HIV-1 Integrase T66I/M154I Catalytic Domain

    PubMed Central

    Brigo, Alessandro; Lee, Keun Woo; Iurcu Mustata, Gabriela; Briggs, James M.

    2005-01-01

    HIV-1 integrase (IN) is an essential enzyme for the viral replication and an interesting target for the design of new pharmaceuticals for multidrug therapy of AIDS. Single and multiple mutations of IN at residues T66, S153, or M154 confer degrees of resistance to several inhibitors that prevent the enzyme from performing its normal strand transfer activity. Four different conformations of IN were chosen from a prior molecular dynamics (MD) simulation on the modeled IN T66I/M154I catalytic core domain as starting points for additional MD studies. The aim of this article is to understand the dynamic features that may play roles in the catalytic activity of the double mutant enzyme in the absence of any inhibitor. Moreover, we want to verify the influence of using different starting points on the MD trajectories and associated dynamical properties. By comparison of the trajectories obtained from these MD simulations we have demonstrated that the starting point does not affect the conformational space explored by this protein and that the time of the simulation is long enough to achieve convergence for this system. PMID:15764656

  12. CCD correlation techniques

    NASA Technical Reports Server (NTRS)

    Hewes, C. R.; Bosshart, P. W.; Eversole, W. L.; Dewit, M.; Buss, D. D.

    1976-01-01

    Two CCD techniques were discussed for performing an N-point sampled data correlation between an input signal and an electronically programmable reference function. The design and experimental performance of an implementation of the direct time correlator utilizing two analog CCDs and MOS multipliers on a single IC were evaluated. The performance of a CCD implementation of the chirp z transform was described, and the design of a new CCD integrated circuit for performing correlation by multiplication in the frequency domain was presented. This chip provides a discrete Fourier transform (DFT) or inverse DFT, multipliers, and complete support circuitry for the CCD CZT. The two correlation techniques are compared.

  13. Fuel-optimal, low-thrust transfers between libration point orbits

    NASA Astrophysics Data System (ADS)

    Stuart, Jeffrey R.

    Mission design requires the efficient management of spacecraft fuel to reduce mission cost, increase payload mass, and extend mission life. High efficiency, low-thrust propulsion devices potentially offer significant propellant reductions. Periodic orbits that exist in a multi-body regime and low-thrust transfers between these orbits can be applied in many potential mission scenarios, including scientific observation and communications missions as well as cargo transport. In light of the recent discovery of water ice in lunar craters, libration point orbits that support human missions within the Earth-Moon region are of particular interest. This investigation considers orbit transfer trajectories generated by a variable specific impulse, low-thrust engine with a primer-vector-based, fuel-optimizing transfer strategy. A multiple shooting procedure with analytical gradients yields rapid solutions and serves as the basis for an investigation into the trade space between flight time and consumption of fuel mass. Path and performance constraints can be included at node points along any thrust arc. Integration of invariant manifolds into the design strategy may also yield improved performance and greater fuel savings. The resultant transfers offer insight into the performance of the variable specific impulse engine and suggest novel implementations of conventional impulsive thrusters. Transfers incorporating invariant manifolds demonstrate the fuel savings and expand the mission design capabilities that are gained by exploiting system symmetry. A number of design applications are generated.

  14. Phase-shifting point diffraction interferometer mask designs

    DOEpatents

    Goldberg, Kenneth Alan

    2001-01-01

    In a phase-shifting point diffraction interferometer, different image-plane mask designs can improve the operation of the interferometer. By keeping the test beam window of the mask small compared to the separation distance between the beams, the problem of energy from the reference beam leaking through the test beam window is reduced. By rotating the grating and mask 45°, only a single one-dimensional translation stage is required for phase-shifting. By keeping two reference pinholes in the same orientation about the test beam window, only a single grating orientation, and thus a single one-dimensional translation stage, is required. The use of a two-dimensional grating allows for a multiplicity of pinholes to be used about the pattern of diffracted orders of the grating at the mask. Orientation marks on the mask can be used to orient the device and indicate the position of the reference pinholes.

  15. High-speed spatial scanning pyrometer

    NASA Technical Reports Server (NTRS)

    Cezairliyan, A.; Chang, R. F.; Foley, G. M.; Miller, A. P.

    1993-01-01

    A high-speed spatial scanning pyrometer has been designed and developed to measure spectral radiance temperatures at multiple target points along the length of a rapidly heating/cooling specimen in dynamic thermophysical experiments at high temperatures (above about 1800 K). The design, which is based on a self-scanning linear silicon array containing 1024 elements, enables the pyrometer to measure spectral radiance temperatures (nominally at 650 nm) at 1024 equally spaced points along a 25-mm target length. The elements of the array are sampled consecutively every 1 microsec, thereby permitting one cycle of measurements to be completed in approximately 1 msec. Procedures for calibration and temperature measurement as well as the characteristics and performance of the pyrometer are described. The details of sources and estimated magnitudes of possible errors are given. An example of measurements of radiance temperatures along the length of a tungsten rod, during its cooling following rapid resistive pulse heating, is presented.

  16. A Common Probe Design for Multiple Planetary Destinations

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Allen, G. A., Jr.; Alunni, A. I.; Amato, M. J.; Atkinson, D. H.; Bienstock, B. J.; Cruz, J. R.; Dillman, R. A.; Cianciolo, A. D.; Elliott, J. O.

    2018-01-01

    Atmospheric probes have been successfully flown to planets and moons in the solar system to conduct in situ measurements. They include the Pioneer Venus multi-probes, the Galileo Jupiter probe, and Huygens probe. Probe mission concepts to five destinations, including Venus, Jupiter, Saturn, Uranus, and Neptune, have all utilized similar-shaped aeroshells and concept of operations, namely a 45-degree sphere cone shape with high density heatshield material and parachute system for extracting the descent vehicle from the aeroshell. Each concept designed its probe to meet specific mission requirements and to optimize mass, volume, and cost. At the 2017 International Planetary Probe Workshop (IPPW), NASA Headquarters postulated that a common aeroshell design could be used successfully for multiple destinations and missions. This "common probe" design could even be assembled with multiple copies, properly stored, and made available for future NASA missions, potentially realizing savings in cost and schedule and reducing the risk of losing technologies and skills difficult to sustain over decades. Thus the NASA Planetary Science Division funded a study to investigate whether a common probe design could meet most, if not all, mission needs to the five planetary destinations with extreme entry environments. The Common Probe study involved four NASA Centers and addressed these issues, including constraints and inefficiencies that occur in specifying a common design. Study methodology: First, a notional payload of instruments for each destination was defined based on priority measurements from the Planetary Science Decadal Survey. Steep and shallow entry flight path angles (EFPA) were defined for each planet based on qualification and operational g-load limits for current, state-of-the-art instruments. Interplanetary trajectories were then identified for a bounding range of EFPA. Next, 3-degrees-of-freedom simulations for entry trajectories were run using the entry state vectors from the interplanetary trajectories. Aeroheating correlations were used to generate stagnation point convective and radiative heat flux profiles for several aeroshell shapes and entry masses. High fidelity thermal response models for various Thermal Protection System (TPS) materials were used to size stagnation-point thicknesses, with margins based on previous studies. Backshell TPS masses were assumed based on scaled heat fluxes from the heatshield and also from previous mission concepts. Presentation: We will present an overview of the study scope, highlights of the trade studies and design driver analyses, and the final recommendations of a common probe design and assembly. We will also indicate limitations that the common probe design may have for the different destinations. Finally, recommended qualification approaches for missions will be presented.

  17. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
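    As a small sketch of one ingredient named above, the snippet below computes a modified (Dubuisson-Jain) Hausdorff distance between two sets of detected crater centres; the point sets are placeholders, and this is not the authors' registration code.

      # Modified Hausdorff distance between two 2D point sets.
      import numpy as np
      from scipy.spatial.distance import cdist

      def modified_hausdorff(A, B):
          """Dubuisson-Jain modified Hausdorff distance between (N, 2) and (M, 2) point sets."""
          D = cdist(A, B)
          return max(D.min(axis=1).mean(), D.min(axis=0).mean())

      craters_ref = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
      craters_new = craters_ref + np.array([1.5, -0.8])   # the same craters, shifted detections
      print(modified_hausdorff(craters_ref, craters_new))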

  18. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    NASA Technical Reports Server (NTRS)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight and Mars Orbit Sample Retrieval for the re-flight mission.

  19. Effects of quantum well growth temperature on the recombination efficiency of InGaN/GaN multiple quantum wells that emit in the green and blue spectral regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammersley, S.; Dawson, P.; Kappers, M. J.

    2015-09-28

    InGaN-based light emitting diodes and multiple quantum wells designed to emit in the green spectral region exhibit, in general, lower internal quantum efficiencies than their blue-emitting counterparts, a phenomenon referred to as the “green gap.” One of the main differences between green-emitting and blue-emitting samples is that the quantum well growth temperature is lower for structures designed to emit at longer wavelengths, in order to reduce the effects of In desorption. In this paper, we report on the impact of the quantum well growth temperature on the optical properties of InGaN/GaN multiple quantum wells designed to emit at 460 nm and 530 nm. It was found that for both sets of samples increasing the temperature at which the InGaN quantum well was grown, while maintaining the same indium composition, led to an increase in the internal quantum efficiency measured at 300 K. These increases in internal quantum efficiency are shown to be due to reductions in the non-radiative recombination rate, which we attribute to reductions in point defect incorporation.

  20. Data Base Design with GIS in Ecosystem Based Multiple Use Forest Management in Artvin, Turkey: A Case Study in Balcı Forest Management Planning Unit.

    PubMed

    Yolasığmaz, Hacı Ahmet; Keleş, Sedat

    2009-01-01

    In Turkey, a planning approach focused on timber production has given way to Multiple Use Management (MUM). Because the whole forestry infrastructure, led by the inventory system, depends on timber production, bottlenecks are expected during the transition period. Database design, probably the most important stage in the transition to MUM, together with the digital base maps that form the basis of this infrastructure, constitutes the main focus of this article. First, Turkey's past forest management philosophy is briefly reviewed, and the Ecosystem Based Multiple Use Forest Management (EBMUFM) approach is introduced. Database design, the second stage of the EBMUFM process, is described by examining the classical planning infrastructure, and the coverages to be produced and consumed are suggested in the form of lists. At the application stage, two different geographical databases were established with GIS for the Balcı Planning Unit for the years 1984 and 2006, and the related base maps were produced. The 20-year temporal change of the planning unit is presented comparatively with regard to stand parameters such as tree types, age class, development stage, canopy closure, mixture, volume and increment.

  1. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing.

    PubMed

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-07-17

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system.
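    A toy illustration (not the paper's guidance law) of offset-keeping formation flight driven by shared leader state: each follower applies a feedforward-plus-proportional velocity command toward a leader-relative slot. The gains, offsets and point-mass dynamics are assumptions made for the sketch.

      # Leader-follower formation keeping with shared position/velocity information.
      import numpy as np

      dt, k_p = 0.1, 0.8
      offsets = np.array([[0.0, 0.0], [-15.0, 10.0], [-15.0, -10.0]])  # leader + two wingman slots (m)
      pos = np.array([[0.0, 0.0], [-40.0, 30.0], [-30.0, -40.0]])      # initial positions (m)

      for step in range(600):
          leader_vel = np.array([5.0, 0.0])               # leader flies straight at 5 m/s
          pos[0] += leader_vel * dt
          for i in (1, 2):                                # followers use the shared leader state
              target = pos[0] + offsets[i]
              cmd = leader_vel + k_p * (target - pos[i])  # feedforward plus proportional correction
              speed = np.linalg.norm(cmd)
              cmd = cmd / speed * min(speed, 12.0)        # saturate the commanded speed
              pos[i] += cmd * dt

      print(np.round(pos[1:] - pos[0] - offsets[1:], 2))  # residual formation errors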

  2. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing

    PubMed Central

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-01-01

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system. PMID:26193281

  3. Electronic Imaging

    DTIC Science & Technology

    1991-11-01

    "Tilted Rough Disc," Donald J. Schertler and Nicholas George; "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George, Institute of Optics.

  4. Making Sense of Sensemaking: Requirements of a Cognitive Analysis to Support C2 Decision Support System Design

    DTIC Science & Technology

    2006-06-01

    heart of a distinction within the CSE community with respect to the differences between Cognitive Task Analysis (CTA) and Cognitive Work Analysis...Wesley. Pirolli, P. and Card, S. (2005). The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In...D. D., and Elm, W. C. (2000). Cognitive task analysis as bootstrapping multiple converging techniques. In Schraagen, Chipman, and Shalin (Eds

  5. Fractal Point Process and Queueing Theory and Application to Communication Networks

    DTIC Science & Technology

    1999-12-31

    use of nonlinear dynamics and chaos in the design of innovative analog error-protection codes for communications applications. In the chaos...the following theses, patent, and papers. 1. A. Narula, M. D. Trott, and G. W. Wornell, "Information-Theoretic Analysis of Multiple-Antenna...Bounds," in Proc. Int. Conf. Dec. Control, (Japan), Dec. 1996. 5. G. W. Wornell and M. D. Trott, "Efficient Signal Processing Techniques for

  6. System and method for design and optimization of grid connected photovoltaic power plant with multiple photovoltaic module technologies

    DOEpatents

    Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich

    2016-03-29

    A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components considers irradiation, temperature, spectral profiles, cost and other practical constraints to achieve the lowest levelized cost of electricity, maximum output and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string or combiner level.

  7. Target tracking and pointing for arrays of phase-locked lasers

    NASA Astrophysics Data System (ADS)

    Macasaet, Van P.; Hughes, Gary B.; Lubin, Philip; Madajian, Jonathan; Zhang, Qicheng; Griswold, Janelle; Kulkarni, Neeraj; Cohen, Alexander; Brashears, Travis

    2016-09-01

    Arrays of phase-locked lasers are envisioned for planetary defense and exploration systems. High-energy beams focused on a threatening asteroid evaporate surface material, creating a reactionary thrust that alters the asteroid's orbit. The same system could be used to probe an asteroid's composition, to search for unknown asteroids, and to propel interplanetary and interstellar spacecraft. Phased-array designs are capable of producing high beam intensity, and allow beam steering and beam profile manipulation. Modular designs allow ongoing addition of emitter elements to a growing array. This paper discusses pointing control for extensible laser arrays. Rough pointing is determined by spacecraft attitude control. Lateral movement of the laser emitter tips behind the optical elements provides intermediate pointing adjustment for individual array elements and beam steering. Precision beam steering and beam formation is accomplished by coordinated phase modulation across the array. Added cells are incorporated into the phase control scheme by precise alignment to local mechanical datums using fast, optical relative position sensors. Infrared target sensors are also positioned within the datum scheme, and provide information about the target vector relative to datum coordinates at each emitter. Multiple target sensors allow refined determination of the target normal plane, providing information to the phase controller for each emitter. As emitters and sensors are added, local position data allows accurate prediction of the relative global position of emitters across the array, providing additional constraints to the phase controllers. Mechanical design and associated phase control that is scalable for target distance and number of emitters is presented.

  8. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
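
    As a rough illustration of the simplest version of the cut-point calculation discussed above (a plain parametric screening cut point on drug-naive signals, ignoring the plate/run random-effects structure that mixADA handles), the following Python sketch uses hypothetical data; the function name and the Tukey-fence outlier screen are assumptions, not the package's method.

        # Minimal parametric screening cut point for drug-naive ADA signals
        # (simplified: no random effects, unlike the mixADA approach above).
        import numpy as np
        from scipy import stats

        def screening_cut_point(signals, alpha=0.05, log_scale=True):
            """Cut point = mean + z_(1-alpha) * SD, optionally on the log
            scale, after a crude Tukey-fence outlier exclusion."""
            x = np.log(signals) if log_scale else np.asarray(signals, float)
            q1, q3 = np.percentile(x, [25, 75])
            iqr = q3 - q1
            x = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]
            z = stats.norm.ppf(1 - alpha)        # 1.645 for alpha = 0.05
            cp = x.mean() + z * x.std(ddof=1)
            return np.exp(cp) if log_scale else cp

        # hypothetical drug-naive specimens
        rng = np.random.default_rng(1)
        print(screening_cut_point(rng.lognormal(0.0, 0.25, size=60)))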

  9. Pyviko: an automated Python tool to design gene knockouts in complex viruses with overlapping genes.

    PubMed

    Taylor, Louis J; Strebel, Klaus

    2017-01-07

    Gene knockouts are a common tool used to study gene function in various organisms. However, designing gene knockouts is complicated in viruses, which frequently contain sequences that code for multiple overlapping genes. Designing mutants that can be traced by the creation of new or elimination of existing restriction sites further compounds the difficulty in experimental design of knockouts of overlapping genes. While software is available to rapidly identify restriction sites in a given nucleotide sequence, no existing software addresses experimental design of mutations involving multiple overlapping amino acid sequences in generating gene knockouts. Pyviko performed well on a test set of over 240,000 gene pairs collected from viral genomes deposited in the National Center for Biotechnology Information Nucleotide database, identifying a point mutation which added a premature stop codon within the first 20 codons of the target gene in 93.2% of all tested gene-overprinted gene pairs. This shows that Pyviko can be used successfully in a wide variety of contexts to facilitate the molecular cloning and study of viral overprinted genes. Pyviko is an extensible and intuitive Python tool for designing knockouts of overlapping genes. Freely available as both a Python package and a web-based interface ( http://louiejtaylor.github.io/pyViKO/ ), Pyviko simplifies the experimental design of gene knockouts in complex viruses with overlapping genes.
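
    To make the core idea concrete, here is a toy Python sketch (not Pyviko's actual API) that searches for a single-nucleotide change introducing a premature stop codon within the first 20 codons of a target frame while leaving the protein of an overlapping frame unchanged; it assumes Biopython is available for translation, and the frame offsets are hypothetical inputs.

        # Toy sketch of the overlapping-gene knockout search (illustrative only).
        from Bio.Seq import Seq

        def _translate(nt):
            nt = nt[: len(nt) // 3 * 3]          # trim any partial codon
            return str(Seq(nt).translate())

        def knockout_candidates(seq, target_start, overlap_start, max_codons=20):
            """Return (position, ref, alt) point mutations that add a stop codon
            within the first max_codons codons of the target frame but keep the
            overlapping frame's protein unchanged."""
            overlap_wt = _translate(seq[overlap_start:])
            target_wt = _translate(seq[target_start:])[:max_codons]
            hits = []
            for i in range(target_start, min(len(seq), target_start + 3 * max_codons)):
                for alt in "ACGT":
                    if alt == seq[i]:
                        continue
                    mut = seq[:i] + alt + seq[i + 1:]
                    if "*" in _translate(mut[target_start:])[:max_codons] and "*" not in target_wt:
                        if _translate(mut[overlap_start:]) == overlap_wt:
                            hits.append((i, seq[i], alt))
            return hits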

  10. A multiple pointing-mount control strategy for space platforms

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1992-01-01

    A new disturbance-adaptive control strategy for multiple pointing-mount space platforms is proposed and illustrated by consideration of a simplified 3-link dynamic model of a multiple pointing-mount space platform. Simulation results demonstrate the effectiveness of the new platform control strategy. The simulation results also reveal a system 'destabilization phenomena' that can occur if the set of individual platform-mounted experiment controllers are 'too responsive.'

  11. Effectiveness of an audience response system in teaching pharmacology to baccalaureate nursing students.

    PubMed

    Vana, Kimberly D; Silva, Graciela E; Muzyka, Diann; Hirani, Lorraine M

    2011-06-01

    It has been proposed that students' use of an audience response system, commonly called clickers, may promote comprehension and retention of didactic material. Whether this method actually improves students' grades, however, has not yet been determined. The purpose of this study was to evaluate whether a lecture format utilizing multiple-choice PowerPoint slides and an audience response system was more effective than a lecture format using only multiple-choice PowerPoint slides in the comprehension and retention of pharmacological knowledge in baccalaureate nursing students. The study also assessed whether the additional use of clickers positively affected students' satisfaction with their learning. Results from 78 students who attended lecture classes with multiple-choice PowerPoint slides plus clickers were compared with those of 55 students who utilized multiple-choice PowerPoint slides only. Test scores between these two groups were not significantly different. A satisfaction questionnaire showed that 72.2% of the control students did not desire the opportunity to use clickers. Of the group utilizing the clickers, 92.3% recommended the use of this system in future courses. The use of multiple-choice PowerPoint slides and an audience response system did not seem to improve the students' comprehension or retention of pharmacological knowledge as compared with those who used solely multiple-choice PowerPoint slides.

  12. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses. The experimental design was completed by a center-point glass, a Vitreous State Laboratory glass, and replicates of the center point and Vitreous State Laboratory glasses.

  13. Doppler centroid estimation ambiguity for synthetic aperture radars

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1989-01-01

    A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
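
    A hedged sketch of the multiple-PRF idea follows: each PRF yields the Doppler centroid only modulo that PRF, so searching integer ambiguity pairs for the most consistent absolute value inside a window set by the pointing uncertainty recovers the unambiguous centroid. The numbers below are illustrative, not SIR-C parameters.

        # Illustrative two-PRF Doppler-centroid ambiguity resolution.
        import numpy as np

        def resolve_centroid(f1_frac, prf1, f2_frac, prf2, f_max):
            """f*_frac: ambiguous (baseband) estimates; f_max: largest |centroid|
            allowed by the antenna-pointing uncertainty (all in Hz)."""
            k1 = np.arange(-int(f_max // prf1) - 1, int(f_max // prf1) + 2)
            k2 = np.arange(-int(f_max // prf2) - 1, int(f_max // prf2) + 2)
            cand1 = f1_frac + k1 * prf1
            cand2 = f2_frac + k2 * prf2
            diff = np.abs(cand1[:, None] - cand2[None, :])
            i, j = np.unravel_index(np.argmin(diff), diff.shape)
            return 0.5 * (cand1[i] + cand2[j])    # most consistent candidate pair

        # hypothetical case: true centroid 4350 Hz, PRFs with no common divisor
        print(resolve_centroid(4350 % 1650, 1650.0, 4350 % 1736, 1736.0, 8000.0))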

  14. Quantitative analysis of single- vs. multiple-set programs in resistance training.

    PubMed

    Wolfe, Brian L; LeMura, Linda M; Cole, Phillip J

    2004-02-01

    The purpose of this study was to examine the existing research on single-set vs. multiple-set resistance training programs. Using the meta-analytic approach, we included studies that met the following criteria in our analysis: (a) at least 6 subjects per group; (b) subject groups consisting of single-set vs. multiple-set resistance training programs; (c) pretest and posttest strength measures; (d) training programs of 6 weeks or more; (e) apparently "healthy" individuals free from orthopedic limitations; and (f) published studies in English-language journals only. Sixteen studies generated 103 effect sizes (ESs) based on a total of 621 subjects, ranging in age from 15-71 years. Across all designs, intervention strategies, and categories, the pretest to posttest ES in muscular strength was (chi = 1.4 +/- 1.4; 95% confidence interval, 0.41-3.8; p < 0.001). The results of 2 x 2 analysis of variance revealed simple main effects for age, training status (trained vs. untrained), and research design (p < 0.001). No significant main effects were found for sex, program duration, and set end point. Significant interactions were found for training status and program duration (6-16 weeks vs. 17-40 weeks) and number of sets performed (single vs. multiple). The data indicated that trained individuals performing multiple sets generated significantly greater increases in strength (p < 0.001). For programs with an extended duration, multiple sets were superior to single sets (p < 0.05). This quantitative review indicates that single-set programs for an initial short training period in untrained individuals result in similar strength gains as multiple-set programs. However, as progression occurs and higher gains are desired, multiple-set programs are more effective.

  15. Assisting People with Developmental Disabilities Improve Their Collaborative Pointing Efficiency with a Multiple Cursor Automatic Pointing Assistive Program

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Cheng, Hsiao-Fen; Li, Chia-Chun; Shih, Ching-Tien; Chiang, Ming-Shan

    2010-01-01

    This study evaluated whether four persons (two groups) with developmental disabilities would be able to improve their collaborative pointing performance through a Multiple Cursor Automatic Pointing Assistive Program (MCAPAP) with a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and is able to…

  16. Implementing Target Value Design.

    PubMed

    Alves, Thais da C L; Lichtig, Will; Rybkowski, Zofia K

    2017-04-01

    An alternative to the traditional way of designing projects is the process of target value design (TVD), which takes different departure points to start the design process. The TVD process starts with the client defining an allowable cost that needs to be met by the design and construction teams. An expected cost in the TVD process is defined through multiple interactions between multiple stakeholders who define wishes and others who define ways of achieving these wishes. Finally, a target cost is defined based on the expected profit the design and construction teams are expecting to make. TVD follows a series of continuous improvement efforts aimed at reaching the desired goals for the project and its associated target value cost. The process takes advantage of rapid cycles of suggestions, analyses, and implementation that starts with the definition of value for the client. In the traditional design process, the goal is to identify user preferences and find solutions that meet the needs of the client's expressed preferences. In the lean design process, the goal is to educate users about their values and advocate for a better facility over the long run; this way owners can help contractors and designers to identify better solutions. This article aims to inform the healthcare community about tools and techniques commonly used during the TVD process and how they can be used to educate and support project participants in developing better solutions to meet their needs now as well as in the future.

  17. An Adaptive Dynamic Pointing Assistance Program to Help People with Multiple Disabilities Improve Their Computer Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling

    2010-01-01

    The latest research adopted software technology to redesign the mouse driver, and turned a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily or possibly use a standard mouse, to improve their pointing performance through a new operation method, Extended Dynamic Pointing Assistive Program (EDPAP),…

  18. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. Recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation makes the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed in the field. Second, WASS implements a fast dense 3D stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques on both the disparity map and the produced point cloud are implemented to remove the vast majority of erroneous points that naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS can process roughly four 5-MPixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use user interface and is designed to be scalable over multiple parallel CPUs.

  19. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℜ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Design and Testing of a Variable Pressure Regulator for the Constellation Space Suit

    NASA Technical Reports Server (NTRS)

    Gill, Larry; Campbell, Colin

    2008-01-01

    The next-generation space suit requires more capability for controlling and adjusting internal pressure than previous suit designs. Next-generation suit pressures will range from slight pressure, for astronaut prebreathe comfort, to hyperbaric pressure levels for emergency medical treatment. Carleton was awarded a contract in 2008 to design and build a proof-of-concept benchtop demonstrator regulator having five setpoints that are selectable using electronic input signaling. Although the basic regulator architecture is very similar to the existing SOP regulator used in the current EMU, the major difference is the electrical selection of multiple setpoints rather than the mechanical On/Off feature found on the SOP regulator. The concept regulator employs a linear actuator/stepper motor combination to provide variable compression of a custom-designed main regulator spring. This concept allows for a continuously adjustable outlet pressure from 8.2 psid (maximum) down to "firm" zero, effectively allowing it to serve as a shutoff valve. This paper details the regulator design and presents test results on regulation bandwidth, command set point accuracy, slew rate, and regulation stability, particularly when the set point is being slewed. Projections for a flight-configuration version are also offered for performance, architectural layout, and weight.

  1. SEL2 servicing: increased science return via on-orbit propellant replenishment

    NASA Astrophysics Data System (ADS)

    Reed, Benjamin B.; DeWeese, Keith; Kienlen, Michael; Aranyos, Thomas; Pellegrino, Joseph; Bacon, Charles; Qureshi, Atif

    2016-07-01

    Spacecraft designers are driving observatories to the distant Sun-Earth Lagrange Point 2 (SEL2) to meet ever-increasing science requirements. The mass fraction dedicated to propellant for these observatories to reach and operate at SEL2 will be allocated with the utmost care, as it comes at the expense of optics and instrument masses. As such, these observatories could benefit from on-orbit refueling, allowing a greater dry-to-wet mass ratio at launch and/or longer mission life. NASA is developing technologies, capabilities and integrated mission designs for multiple servicing applications in low Earth orbit (LEO), geosynchronous Earth orbit (GEO) and cislunar locations. Restore-L, a mission officially in formulation, will launch a free-flying robotic servicer to refuel a government-owned satellite in LEO by mid-2020. This paper will detail the results of a point design mission study to extend Restore-L servicing technologies from LEO to SEL2. This SEL2 mission would launch an autonomous, robotic servicer spacecraft equipped to extend the life of two space assets through refueling. Two space platforms were chosen to 1) drive the requirements for achieving SEL2 orbit and rendezvous with a spacecraft, and 2) drive the requirements to translate within SEL2 to conduct a follow-on servicing mission. Two fuels, xenon and hydrazine, were selected to assess a multiple delivery system. This paper will address key mission drivers, such as servicer autonomy (necessitated by communications latency at L2). Also discussed will be the value of adding cooperative servicing elements to the client observatories to reduce mission risk.

  2. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. A random point is started upon, and the objective function is evaluated at that point. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is also evaluated. If the change in objective function values (Delta)E is negative, the proposed new point is accepted. If (Delta)E is positive, the proposed new point is accepted according to the Metropolis criterion, with probability P((Delta)E) = exp(-(Delta)E/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is decremented and a new chain begins. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to the system being driven out of equilibrium as temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface. In reality this is not plausible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty in determining when equilibrium has been reached so that the current Markov chain can be terminated. It has been suggested in recent literature that simulating the thermodynamic properties of specific heat, entropy, and internal energy from the Boltzmann distribution can provide good indicators of having reached equilibrium at a certain temperature. These properties are tested for their efficacy and implemented in SA and DSSA code with respect to TWTA collector optimization.
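
    The Metropolis rule quoted above is easy to show in code. The following is a minimal, generic simulated-annealing sketch in Python with a placeholder objective; it is not the TWTA collector model, and the cooling schedule and step size are arbitrary choices for illustration.

        # Minimal simulated annealing with the Metropolis acceptance criterion.
        import math, random

        def simulated_annealing(f, x0, step, t0=1.0, cooling=0.9,
                                n_temps=50, chain_len=100):
            x, fx = list(x0), f(x0)
            best, fbest = list(x), fx
            t = t0
            for _ in range(n_temps):               # monotonically decreasing T
                for _ in range(chain_len):         # Markov chain at this T
                    y = [xi + random.uniform(-step, step) for xi in x]
                    fy = f(y)
                    dE = fy - fx
                    # downhill moves always accepted; uphill with prob exp(-dE/T)
                    if dE < 0 or random.random() < math.exp(-dE / t):
                        x, fx = y, fy
                        if fx < fbest:
                            best, fbest = list(x), fx
                t *= cooling                       # decrement the temperature
            return best, fbest

        # placeholder objective: a bumpy 2-D test function
        f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + math.sin(5 * v[0])
        print(simulated_annealing(f, [4.0, 4.0], step=0.5))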

  3. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    NASA Astrophysics Data System (ADS)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle, makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost issues. In regard to the accurate methods of calculating multiple scattering that are available, their computational cost is far too prohibitive for climate GCM applications. Utilization of two-stream-type radiative transfer approximations may be computationally fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
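
    The doubling step itself can be sketched in a few lines for the scalar two-stream case (this is only a cartoon of the method, not the GISS look-up-table parameterization): starting from the reflection and transmission of a very thin layer, two identical layers are combined repeatedly, with the factor 1/(1 - r*r) summing the multiple reflections between them. The thin-layer initialization below is an assumed illustrative form.

        # Scalar two-stream doubling sketch (illustrative only).
        def double_layer(r, t, n_doublings):
            for _ in range(n_doublings):
                denom = 1.0 - r * r            # multiple reflections between layers
                r, t = r + t * r * t / denom, t * t / denom
            return r, t

        # assumed thin-layer start: optical depth 1/1024, single-scattering
        # albedo 0.99, backscatter fraction 0.5 (illustrative numbers)
        dtau, omega, b = 1.0 / 1024, 0.99, 0.5
        r0 = omega * b * dtau
        t0 = 1.0 - dtau * (1.0 - omega * (1.0 - b))
        print(double_layer(r0, t0, 10))        # layer of optical depth ~1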

  4. Multi-static networked 3D ladar for surveillance and access control

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ogirala, S. S. R.; Hu, B.; Le, Han Q.

    2007-04-01

    A theoretical design and simulation of a 3D ladar system concept for surveillance, intrusion detection, and access control is described. It is a non-conventional system architecture that consists of: i) a multi-static configuration with an arbitrarily scalable number of transmitters (Tx's) and receivers (Rx's) that form an optical wireless code-division-multiple-access (CDMA) network, and ii) a flexible system architecture with modular plug-and-play components that can be deployed for any facility with arbitrary topology. Affordability is a driving consideration, and a key feature for low cost is an asymmetric use of many inexpensive Rx's in conjunction with fewer Tx's, which are generally more expensive. The Rx's are spatially distributed close to the surveyed area for large coverage, and capable of receiving signals from multiple Tx's with moderate laser power. The system produces sensing information that scales as NxM, where N, M are the number of Tx's and Rx's, as opposed to linear scaling ~N in a non-networked system. Also, for target positioning, besides laser pointing direction and time-of-flight, the algorithm includes multiple point-of-view image fusion and triangulation for enhanced accuracy, which is not applicable to non-networked monostatic ladars. Simulation and scaled-model experiments on some aspects of this concept are discussed.

  5. Combined central and peripheral stimulation to facilitate motor recovery after stroke: the effect of number of sessions on outcome.

    PubMed

    Lindenberg, Robert; Zhu, Lin L; Schlaug, Gottfried

    2012-06-01

    Proof-of-principle studies have demonstrated transient beneficial effects of transcranial direct current stimulation (tDCS) on motor function in stroke patients, mostly after single treatment sessions. To assess the efficacy of multiple treatment sessions on motor outcome. The authors examined the effects of two 5-day intervention periods of bihemispheric tDCS and simultaneous occupational/physical therapy on motor function in a group of 10 chronic stroke patients. The first 5-day period yielded an increase in Upper-Extremity Fugl-Meyer (UE-FM) scores by 5.9 ± 2.4 points (16.6% ± 10.6%). The second 5-day period resulted in further meaningful, although significantly lower, gains with an additional improvement of 2.3 ± 1.4 points in UE-FM compared with the end of the first 5-day period (5.5% ± 4.2%). The overall mean change after the 2 periods was 8.2 ± 2.2 points (22.9% ± 11.4%). The results confirm the efficacy of bihemispheric tDCS in combination with peripheral sensorimotor stimulation. Furthermore, they demonstrate that the effects of multiple treatment sessions in chronic stroke patients may not necessarily lead to a linear response function, which is of relevance for the design of experimental neurorehabilitation trials.

  6. Motion and force control of multiple robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper addresses the motion and force control problem of multiple robot arms manipulating a cooperatively held object. A general control paradigm is introduced which decouples the motion and force control problems. For motion control, different control strategies are constructed based on the variables used as the control input in the controller design. There are three natural choices: acceleration of a generalized coordinate, arm-tip force vectors, and the joint torques. The first two choices require full model information but produce simple models for the control design problem. The last choice results in a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system. The motion control only determines the joint torque to within a manifold, due to the multiple-arm kinematic constraint. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, an optimization can be performed to best allocate the desired end-effector control force to the joint actuators. The other possibility is to control the internal force about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
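
    The torque nonuniqueness mentioned above can be illustrated with a small numerical sketch: for a grasp map G relating contact forces to the net object wrench, the pseudo-inverse gives the minimum-norm allocation, and the nullspace of G parameterizes internal forces that can be regulated about a set point. The planar two-arm grasp map and numbers below are hypothetical, not taken from the paper.

        # Force allocation for a cooperatively held object (illustrative).
        import numpy as np
        from scipy.linalg import null_space

        # two arms gripping a bar at x = -0.5 m and x = +0.5 m (planar fx, fy each)
        G = np.array([[1.0, 0.0, 1.0, 0.0],      # net force x
                      [0.0, 1.0, 0.0, 1.0],      # net force y
                      [0.0, -0.5, 0.0, 0.5]])    # net moment about the object frame
        w_des = np.array([0.0, 9.81, 0.0])       # hold the object against gravity

        f_min = np.linalg.pinv(G) @ w_des        # minimum-norm contact forces
        N = null_space(G)                        # internal-force direction(s)
        f_cmd = f_min + N @ np.array([5.0])      # add an internal squeeze component
        print(f_min, f_cmd, G @ f_cmd)           # G @ f_cmd still equals w_des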

  7. Computational circular dichroism estimation for point-of-care diagnostics via vortex half-wave retarders

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander

    2018-02-01

    Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Methods for this kind of analysis involve tabletop devices that weigh tens of kilograms with costs on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder in between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables the measurement of polarization variations across multiple polarizations after they interact with a biological sample, simultaneously, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.

  8. Programmable growth of branched silicon nanowires using a focused ion beam.

    PubMed

    Jun, Kimin; Jacobson, Joseph M

    2010-08-11

    Although significant progress has been made in being able to spatially define the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out in deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means for programmably designating nanowire branching points by means of focused ion beam-defined VLS catalytic points. This technique is repeatable without losing fidelity allowing multiple rounds of branching point definition followed by branch growth resulting in complex structures. The single crystal nature of this approach allows us to describe resulting structures with linear combinations of base vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D defined wire structures branched nanotubes were fabricated with interconnected nanochannels inside. We believe that the techniques developed here should comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.

  9. Time-Series Analysis: Assessing the Effects of Multiple Educational Interventions in a Small-Enrollment Course

    NASA Astrophysics Data System (ADS)

    Warren, Aaron R.

    2009-11-01

    Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
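
    A possible way to set up the bookkeeping described above is sketched below: per-student allocations of 100 Confidence Points are averaged into one class-level series per answer option, and an ARIMA model is fitted to each series. The data are simulated, the ARIMA order is arbitrary, and the statsmodels dependency is an assumption; this is not the authors' analysis code.

        # Class-averaged confidence-point series with a per-option ARIMA fit.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # ratings[t, s, o]: at assessment t, student s splits 100 points over options o
        rng = np.random.default_rng(0)
        ratings = rng.dirichlet(np.ones(4), size=(20, 30)) * 100   # 20 times, 30 students

        class_avg = ratings.mean(axis=1)          # shape (20, 4): one series per option
        for option in range(class_avg.shape[1]):
            fit = ARIMA(class_avg[:, option], order=(1, 0, 0)).fit()
            print(f"option {option}: AR(1) coefficient ~ {fit.params[1]:.2f}")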

  10. Design and use of multiple blade slurry sawing in a production atmosphere

    NASA Technical Reports Server (NTRS)

    Lynah, F. P., Jr.; Ross, J. B.

    1982-01-01

    The technique and uses of the multiple blade slurry (MBS) saw are considered. Multiple bands of steel are arranged in a frame, and the frame is reciprocated so that the steel bands bear against a workpiece while abrasive is simultaneously applied at the point of contact. The blades wear slots in the workpiece and progress through the piece, resulting in several parts, or wafers. The transition from diamond slicing to MBS is justified by savings resulting from minimized kerf losses, minimized subsurface damage, and improved surface quality off the saw, which allows wafering much closer to finished thickness specifications. The current state-of-the-art MBS technology must be significantly improved if the low-cost solar array (LSA) goals are to be attained. It is concluded that although MBS will never be the answer to every wafering requirement, the economical production of wafers to LSA project specifications will be achieved.

  11. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  12. Development of a Whole-Body Haptic Sensor with Multiple Supporting Points and Its Application to a Manipulator

    NASA Astrophysics Data System (ADS)

    Hanyu, Ryosuke; Tsuji, Toshiaki

    This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point structure has previously been developed by the present authors. However, the system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points was proposed and its performance was verified. In this paper, the basic concept of the mechanism is first introduced. Next, an evaluation of the proposed method, performed by conducting some experiments, is presented.

  13. The current practice of using multiple representations in year 4 science classrooms

    NASA Astrophysics Data System (ADS)

    Chuenmanee, Chanoknat; Thathong, Kongsak

    2018-01-01

    Multiple representations have been widely used as a reasoning tool for understanding complex scientific concepts. This study therefore investigated the current practice of using multiple representations in Year 4 science classrooms, in terms of the modes and levels that appear in curriculum documents, teaching plans, tasks and assessments, teaching practices, and students' behaviors. Documentary analysis, classroom observation, and interviews were used as the data collection methods. First, Year 4 science documents were analyzed. Classroom observation was then used to examine what actually happens in the classroom. Finally, in-depth interviews were used to gather further information. The findings reveal that many modes of verbal, visual, and tactile representation, across three levels of representation, are present in the Year 4 documents. According to the classroom observations and interviews, three main points emerged about applying multiple representations in classrooms. First, various modes of representation were used, but many of them were not accompanied by attention to levels. Second, regarding the levels of representation, the macroscopic and cellular levels were introduced in all classrooms, while the symbolic level was provided in only some classrooms. Finally, the connection of modes and levels showed that modes of representation were used without consideration of their levels, so teaching practice did not appear to meet the aims of the curriculum. These issues should therefore be considered when organizing and designing further science lessons.

  14. Cues-pause-point language training: teaching echolalics functional use of their verbal labeling repertoires.

    PubMed Central

    McMorrow, M J; Foxx, R M; Faw, G D; Bittle, R G

    1987-01-01

    We evaluated the direct and generalized effects of cues-pause-point language training procedures on immediate echolalia and correct responding in two severely retarded females. Two experiments were conducted with each subject in which the overall goal was to encourage them to remain quiet before, during, and briefly after the presentation of questions and then to verbalize on the basis of environmental cues whose labels represented the correct responses. Multiple baseline designs across question/response pairs (Experiment I) or question/response pairs and settings (Experiment II) demonstrated that echolalia was rapidly replaced by correct responding on the trained stimuli. More importantly, there were clear improvements in subjects' responding to untrained stimuli. Results demonstrated that the cues-pause-point procedures can be effective in teaching severely retarded or echolalic individuals functional use of their verbal labeling repertoires. PMID:3583962

  15. Evaluation of selective vs. point-source perforating for hydraulic fracturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, P.J.; Kerley, L.

    1996-12-31

    This paper is a case history comparing and evaluating the effects of fracturing the Reef Ridge Diatomite formation in the Midway-Sunset Field, Kern County, California, using "select-fire" and "point-source" perforating completions. A description of the reservoir, production history, and fracturing techniques used leading up to this study is presented. Fracturing treatment analysis and production history matching were used to evaluate the reservoir and fracturing parameters for both completion types. The work showed that single fractures were created with the point-source (PS) completions, and multiple fractures resulted from many of the select-fire (SF) completions. A good correlation was developed between productivity and the product of formation permeability, net fracture height, bottomhole pressure, and propped fracture length. Results supported the continued development of 10 wells using the PS concept with a more efficient treatment design, resulting in substantial cost savings.

  16. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    PubMed Central

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

    Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types: Fluorescence, colorimetric dyes, and bioluminescence; and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  17. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
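
    The comparison described above can be mimicked with a short simulation: the arithmetic average of multiple-point concentrations is compared against a composite formed from equal aliquots, using a paired t-test on log10 concentrations. The data are simulated and the exact test used in the study is an assumption here.

        # Composite vs. averaged multiple-point E. coli concentrations (simulated).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n_days, n_points = 116, 3
        point_samples = rng.lognormal(mean=4.0, sigma=0.8, size=(n_days, n_points))

        multi_point_avg = point_samples.mean(axis=1)
        # an ideal equal-aliquot composite has the same expected concentration;
        # small mixing/assay noise keeps the paired measurements from being identical
        composite = multi_point_avg * rng.lognormal(0.0, 0.1, size=n_days)

        t, p = stats.ttest_rel(np.log10(multi_point_avg), np.log10(composite))
        print(f"paired t = {t:.2f}, p = {p:.3f}")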

  18. NewProt - a protein engineering portal.

    PubMed

    Schwarte, Andreas; Genz, Maika; Skalden, Lilly; Nobili, Alberto; Vickers, Clare; Melse, Okke; Kuipers, Remko; Joosten, Henk-Jan; Stourac, Jan; Bendl, Jaroslav; Black, Jon; Haase, Peter; Baakman, Coos; Damborsky, Jiri; Bornscheuer, Uwe; Vriend, Gert; Venselaar, Hanka

    2017-06-01

    The NewProt protein engineering portal is a one-stop shop for in silico protein engineering. It gives access to a large number of servers that compute a wide variety of protein structure characteristics supporting work on the modification of proteins through the introduction of (multiple) point mutations. The results can be inspected through multiple visualizers. The HOPE software is included to indicate mutations with possible undesired side effects. The Hotspot Wizard software is embedded for the design of mutations that modify a protein's activity, specificity, or stability. The NewProt portal is freely accessible at http://newprot.cmbi.umcn.nl/ and http://newprot.fluidops.net/. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Leake, R. J.; Sain, M. K.

    1978-01-01

    General goals of the research were classified into two categories. The first category involves the use of modern multivariable frequency domain methods for control of engine models in the neighborhood of a quiescent point. The second category involves the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. In the frequency domain category, works were published in the areas of low-interaction design, polynomial design, and multiple setpoint studies. A number of these ideas progressed to the point at which they are starting to attract practical interest. In the nonlinear category, advances were made both in engine modelling and in the details associated with software for determination of time optimal controls. Nonlinear models for a two spool turbofan engine were expanded and refined; and a promising new approach to automatic model generation was placed under study. A two time scale scheme was developed to do two-dimensional dynamic programming, and an outward spiral sweep technique has greatly speeded convergence times in time optimal calculations.

  20. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.

  1. Control of broadband optically generated ultrasound pulses using binary amplitude holograms.

    PubMed

    Brown, Michael D; Jaros, Jiri; Cox, Ben T; Treeby, Bradley E

    2016-04-01

    In this work, the use of binary amplitude holography is investigated as a mechanism to focus broadband acoustic pulses generated by high peak-power pulsed lasers. Two algorithms are described for the calculation of the binary holograms; one using ray-tracing, and one using an optimization based on direct binary search. It is shown using numerical simulations that when a binary amplitude hologram is excited by a train of laser pulses at its design frequency, the acoustic field can be focused at a pre-determined distribution of points, including single and multiple focal points, and line and square foci. The numerical results are validated by acoustic field measurements from binary amplitude holograms, excited by a high peak-power laser.
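
    A minimal version of the direct-binary-search idea is sketched below: hologram elements are toggled one at a time and a flip is kept only if the summed focal amplitude improves. The propagation matrix G (element-to-focus transfer) is assumed to be supplied by some forward model; this is a stand-in for the paper's optimizer, not its implementation.

        # Direct binary search for a binary amplitude hologram (illustrative).
        import numpy as np

        def direct_binary_search(G, n_passes=5, seed=0):
            """G: complex matrix of shape (n_foci, n_elements)."""
            rng = np.random.default_rng(seed)
            h = rng.integers(0, 2, size=G.shape[1])     # random binary start
            score = np.abs(G @ h).sum()                 # focal amplitude metric
            for _ in range(n_passes):
                for i in rng.permutation(G.shape[1]):
                    h[i] ^= 1                           # trial flip
                    trial = np.abs(G @ h).sum()
                    if trial > score:
                        score = trial                   # keep the improvement
                    else:
                        h[i] ^= 1                       # revert the flip
            return h, score

        # toy usage with a random-phase propagation matrix
        G = np.exp(1j * np.random.default_rng(1).uniform(0, 2 * np.pi, (3, 256)))
        print(direct_binary_search(G)[1])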

  2. TIGGERC: Turbomachinery Interactive Grid Generator for 2-D Grid Applications and Users Guide

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1994-01-01

    A two-dimensional multi-block grid generator has been developed for a new design and analysis system for studying multiple blade-row turbomachinery problems. TIGGERC is a mouse-driven, interactive grid generation program which can be used to modify boundary coordinates and grid packing and generates surface grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries. The interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC can generate a blocked axisymmetric H-grid, C-grid, I-grid or O-grid for studying turbomachinery flow problems. TIGGERC was developed for operation on Silicon Graphics workstations. A detailed discussion of the grid generation methodology, menu options, operational features, and sample grid geometries is presented.
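
    The hyperbolic tangent boundary distribution mentioned above can be illustrated with a short sketch. The mapping below is a generic two-sided tanh stretching that clusters points toward both ends of a block edge; the function name `tanh_spacing` and the clustering parameter `beta` are illustrative assumptions, not TIGGERC's actual interface.

    ```python
    import numpy as np

    def tanh_spacing(n_points: int, beta: float = 2.5) -> np.ndarray:
        """Distribute n_points on [0, 1] with hyperbolic-tangent clustering toward
        both ends; larger beta clusters the points more strongly at the edges."""
        xi = np.linspace(0.0, 1.0, n_points)      # uniform computational coordinate
        return 0.5 * (1.0 + np.tanh(beta * (2.0 * xi - 1.0)) / np.tanh(beta))

    # Example: 11 boundary points clustered near both ends of a block edge
    print(np.round(tanh_spacing(11, beta=3.0), 4))
    ```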

  3. Virtual reality for automotive design evaluation

    NASA Technical Reports Server (NTRS)

    Dodd, George G.

    1995-01-01

    A general description of Virtual Reality technology and possible applications was given from publicly available material. A video tape was shown demonstrating the use of multiple large-screen stereoscopic displays, configured in a 10' x 10' x 10' room, to allow a person to evaluate and interact with a vehicle which exists only as mathematical data, and is made only of light. The correct viewpoint of the vehicle is maintained by tracking special glasses worn by the subject. Interior illumination was changed by moving a virtual light around by hand; interior colors are changed by pointing at a color on a color palette, then pointing at the desired surface to change. We concluded by discussing research needed to move this technology forward.

  4. Pointing with Power or Creating with Chalk

    ERIC Educational Resources Information Center

    Rudow, Sasha R.; Finck, Joseph E.

    2015-01-01

    This study examines the attitudes of students on the use of PowerPoint and chalk/white boards in college science lecture classes. Students were asked to complete a survey regarding their experiences with PowerPoint and chalk/white boards in their science classes. Both multiple-choice and short answer questions were used. The multiple-choice…

  5. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.

  6. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019
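
    As a hedged illustration of the Dinkelbach iteration mentioned in the two records above, the following sketch maximizes a toy scalar energy-efficiency ratio rate(p)/power(p). The channel gain `h`, circuit power `p_circuit`, and power budget `p_max` are made-up values, and the inner problem is solved by a bounded scalar search rather than the paper's CCCP-based beamforming step.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy single-link energy-efficiency problem: maximize rate(p) / power(p)
    h, p_circuit, p_max = 4.0, 1.0, 10.0
    rate = lambda p: np.log2(1.0 + h * p)      # spectral efficiency (bits/s/Hz)
    power = lambda p: p + p_circuit            # transmit plus circuit power

    lam = 0.0                                  # current energy-efficiency estimate
    for _ in range(50):
        # Inner problem: maximize rate(p) - lam * power(p) over the feasible powers
        res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                              bounds=(0.0, p_max), method="bounded")
        p_star = res.x
        if abs(rate(p_star) - lam * power(p_star)) < 1e-9:   # Dinkelbach stopping rule
            break
        lam = rate(p_star) / power(p_star)                   # update the ratio parameter

    print(f"optimal power ~ {p_star:.4f}, energy efficiency ~ {lam:.4f}")
    ```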

  7. Experimental design matters for statistical analysis: how to handle blocking.

    PubMed

    Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian

    2018-03-01

    Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry.
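
    The linear mixed model with a random block effect described above can be sketched as follows. The simulated data frame, the column names (`height`, `treatment`, `block`), and the effect sizes are illustrative assumptions, not the paper's maize data; only the model call pattern is standard statsmodels usage.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    blocks, treatments = 8, ["control", "adjuvant_A", "adjuvant_B"]
    rows = []
    for b in range(blocks):
        block_effect = rng.normal(0.0, 4.0)                     # shared within a block
        for t_idx, t in enumerate(treatments):
            rows.append({"block": f"B{b}", "treatment": t,
                         "height": 100 - 10 * t_idx + block_effect + rng.normal(0.0, 2.0)})
    df = pd.DataFrame(rows)

    # Linear mixed model: fixed treatment effect, random intercept per block
    fit = smf.mixedlm("height ~ treatment", df, groups=df["block"]).fit()
    print(fit.summary())
    ```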

  8. Realtime control of multiple-focus phased array heating patterns based on noninvasive ultrasound thermography.

    PubMed

    Casper, Andrew; Liu, Dalong; Ebbini, Emad S

    2012-01-01

    A system for the realtime generation and control of multiple-focus ultrasound phased-array heating patterns is presented. The system employs a 1-MHz, 64-element array and driving electronics capable of fine spatial and temporal control of the heating pattern. The driver is integrated with a realtime 2-D temperature imaging system implemented on a commercial scanner. The coordinates of the temperature control points are defined on B-mode guidance images from the scanner, together with the temperature set points and controller parameters. The temperature at each point is controlled by an independent proportional, integral, and derivative controller that determines the focal intensity at that point. Optimal multiple-focus synthesis is applied to generate the desired heating pattern at the control points. The controller dynamically reallocates the power available among the foci from the shared power supply upon reaching the desired temperature at each control point. Furthermore, anti-windup compensation is implemented at each control point to improve the system dynamics. In vitro experiments in tissue-mimicking phantom demonstrate the robustness of the controllers for short (2-5 s) and longer multiple-focus high-intensity focused ultrasound exposures. Thermocouple measurements in the vicinity of the control points confirm the dynamics of the temperature variations obtained through noninvasive feedback. © 2011 IEEE
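
    A minimal sketch of per-control-point temperature regulation with a simple anti-windup rule is given below; the gains, sampling interval, set points, and the conditional-integration scheme are illustrative assumptions rather than the authors' exact controller or power-reallocation logic.

    ```python
    class PIDController:
        """Discrete PID with conditional-integration anti-windup, one per control point."""
        def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.out_min, self.out_max = out_min, out_max
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            candidate = (self.kp * error + self.ki * (self.integral + error * self.dt)
                         + self.kd * derivative)
            # Anti-windup: only accumulate the integral while the output is not saturated
            if self.out_min < candidate < self.out_max:
                self.integral += error * self.dt
            return min(max(candidate, self.out_min), self.out_max)

    # One controller per temperature control point; output is a normalized focal intensity
    controllers = [PIDController(kp=0.05, ki=0.02, kd=0.0, dt=0.1) for _ in range(4)]
    temps, targets = [37.0, 37.5, 38.0, 37.2], [43.0] * 4
    intensities = [c.update(t, m) for c, t, m in zip(controllers, targets, temps)]
    print(intensities)
    ```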

  9. PROCAMS - A second generation multispectral-multitemporal data processing system for agricultural mensuration

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Nalepka, R. F.

    1976-01-01

    PROCAMS (Prototype Classification and Mensuration System) has been designed for the classification and mensuration of agricultural crops (specifically small grains including wheat, rye, oats, and barley) through the use of data provided by Landsat. The system includes signature extension as a major feature and incorporates multitemporal as well as early season unitemporal approaches for using multiple training sites. Also addressed are partial cloud cover and cloud shadows, bad data points and lines, as well as changing sun angle and atmospheric state variations.

  10. Video-Based Intervention in Teaching Fraction Problem-Solving to Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin

    2015-09-01

    The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.

  11. Analysis of the Yule-Nielsen effect with the multiple-path point spread function in a frequency-modulated halftone.

    PubMed

    Rogers, Geoffrey

    2018-06-01

    The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.

  12. Design of MPPT Controller Monitoring Software Based on QT Framework

    NASA Astrophysics Data System (ADS)

    Meng, X. Z.; Lu, P. G.

    2017-10-01

    The MPPT controller is a hardware device for tracking the maximum power point of a solar photovoltaic array. Multiple controllers can operate in a networked mode using a specific communication protocol. In this article, based on C++ GUI programming with the Qt framework, we designed a desktop application for monitoring and analyzing the operational parameters of MPPT controllers. The network is built on the Modbus protocol in Remote Terminal Unit (RTU) mode, and the host-computer desktop application connects to all controllers in the network through RS485 or ZigBee wireless communication. Using this application, users can monitor controller parameters from anywhere over the internet.
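
    Since the controller network uses Modbus RTU, a small sketch of the RTU frame check sequence may help clarify the protocol detail. The CRC-16/Modbus computation itself is standard; the example request bytes and register addresses are hypothetical and do not correspond to any particular MPPT controller's register map.

    ```python
    def modbus_crc16(frame: bytes) -> bytes:
        """CRC-16/Modbus (poly 0xA001, init 0xFFFF), appended little-endian to RTU frames."""
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc.to_bytes(2, "little")

    # Example RTU request: slave 0x01, function 0x03 (read holding registers),
    # start address 0x0000, quantity 0x0002 -- the register map is hypothetical.
    pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
    print((pdu + modbus_crc16(pdu)).hex(" "))
    ```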

  13. Land vehicle antennas for satellite mobile communications

    NASA Technical Reports Server (NTRS)

    Haddad, H. A.; Paschen, D.; Pieper, B. V.

    1985-01-01

    Antenna designs applicable to future satellite mobile vehicle communications are examined. Microstrip disk, quadrifilar helix, cylindrical microstrip, and inverted V and U crossed-dipole low gain antennas (3-5 dBic) that provide omnidirectional coverage are described. Diagrams of medium gain antenna (9-12 dBic) concepts are presented; the antennas are classified into three types: (1) electronically steered with digital phase shifters; (2) electronically switched with switchable power divider/combiner; and (3) mechanically steered with motor. The operating characteristics of a conformal antenna with electronic beam steering and a nonconformal design with mechanical steering are evaluated with respect to isolation levels in a multiple satellite system. Vehicle antenna pointing systems and antenna system costs are investigated.

  14. Attitude control challenges for earth orbiters of the 1980's

    NASA Technical Reports Server (NTRS)

    Hibbard, W.

    1980-01-01

    Experience gained in designing attitude control systems for orbiting spacecraft of the late 1980's is related. Implications for satellite attitude control design of the guidance capabilities, rendezvous and recovery requirements, use of multiple-use spacecraft and the development of large spacecraft associated with the advent of the Space Shuttle are considered. Attention is then given to satellite attitude control requirements posed by the Tracking and Data Relay Satellite System, the Global Positioning System, the NASA End-to-End Data System, and Shuttle-associated subsatellites. The anticipated completion and launch of the Space Telescope, which will provide one of the first experiences with the new generation of attitude control, is also pointed out.

  15. Motion laws synthesis for cam mechanisms with multiple follower displacement

    NASA Astrophysics Data System (ADS)

    Podgornyj, Yu I.; Skeeba, V. Yu; Kirillov, A. V.; Martynova, T. G.; Skeeba, P. Yu

    2018-03-01

    The research discusses the design of cam mechanisms. A review of the specialized literature indicates that the synthesis of motion laws for cam mechanisms is currently done mainly with a standard set of acceleration curves. In some cases, the designer needs to synthesize a new acceleration law that is task-specific and enforces a certain production step. The technological loads and inertia loads generated by the mechanism are calculated to analyze the behavior of the slay mechanism in the production of closely woven fabrics. The mathematical packages MathCad and SolidWorks are used in the calculations. As a result of the research, the authors propose a methodology for synthesizing a slay mechanism with multiple follower displacements for the point of contact between the reed and the fabric edge. The theoretical studies have been tested on a specific machine model (STB loom). The authors have synthesized the motion law of the filling-thread beat-up mechanism for the production of strong fabrics. New basic and closing cam profiles are proposed. The results are intended to enhance the capabilities of the looms and to recommend the most efficient equipment operation modes to producers.

  16. The Multiple Control of Verbal Behavior

    PubMed Central

    Michael, Jack; Palmer, David C; Sundberg, Mark L

    2011-01-01

    Amid the novel terms and original analyses in Skinner's Verbal Behavior, the importance of his discussion of multiple control is easily missed, but multiple control of verbal responses is the rule rather than the exception. In this paper we summarize and illustrate Skinner's analysis of multiple control and introduce the terms convergent multiple control and divergent multiple control. We point out some implications for applied work and discuss examples of the role of multiple control in humor, poetry, problem solving, and recall. Joint control and conditional discrimination are discussed as special cases of multiple control. We suggest that multiple control is a useful analytic tool for interpreting virtually all complex behavior, and we consider the concepts of derived relations and naming as cases in point. PMID:22532752

  17. New fast DCT algorithms based on Loeffler's factorization

    NASA Astrophysics Data System (ADS)

    Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon

    2012-10-01

    This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3-4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
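
    The gap between direct matrix multiplication and a partial-butterfly realization can be sketched with a single even/odd decomposition of the DCT-II. This is a generic textbook split used only for illustration; it is not Loeffler's full factorization or the paper's proposed integer transform.

    ```python
    import numpy as np

    def dct2_direct(x):
        """Unnormalized DCT-II by direct matrix multiplication: O(N^2) multiplies."""
        N = len(x)
        n, k = np.arange(N), np.arange(N)[:, None]
        return np.cos(np.pi * (2 * n + 1) * k / (2 * N)) @ x

    def dct2_partial_butterfly(x):
        """One even/odd (partial butterfly) stage: roughly halves the multiply count."""
        x = np.asarray(x, dtype=float)
        N, half = len(x), len(x) // 2
        e = x[:half] + x[half:][::-1]          # even-symmetric half
        o = x[:half] - x[half:][::-1]          # odd-symmetric half
        X = np.empty(N)
        X[0::2] = dct2_direct(e)               # even coefficients = half-size DCT-II
        m, k_odd = np.arange(half), (2 * np.arange(half) + 1)[:, None]
        X[1::2] = np.cos(np.pi * (2 * m + 1) * k_odd / (2 * N)) @ o
        return X

    x = np.random.default_rng(0).standard_normal(32)
    assert np.allclose(dct2_direct(x), dct2_partial_butterfly(x))
    print("32-point partial-butterfly DCT matches direct matrix multiplication")
    ```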

  18. Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.

    PubMed

    Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P

    2017-09-01

    Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. This would be much more powerful and general purpose when combined with other existing well-known algorithmic solutions that exist for conventional computing architectures using a suitable ALU. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can mitigate the gap between these two. An ALU must be able to perform all common logic operations (NOT, OR, AND, XOR, NOR, NAND, and XNOR), comparison and shift operations, and integer and floating-point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using a sticker-based DNA model is proposed, with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations performed here use 2's complement arithmetic, and the floating-point operations follow the IEEE 754 floating-point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any next operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without any modification. Second, once the basic operations of the sticker model can be automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.

  19. Advanced multiphoton methods for in vitro and in vivo functional imaging of mouse retinal neurons (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cohen, Noam; Schejter, Adi; Farah, Nairouz; Shoham, Shy

    2016-03-01

    Studying the responses of retinal ganglion cell (RGC) populations has major significance in vision research. Multiphoton imaging of optogenetic probes has recently become the leading approach for visualizing neural populations and has specific advantages for imaging retinal activity during visual stimulation, because it leads to reduced direct photoreceptor excitation. However, multiphoton retinal activity imaging is not straightforward: point-by-point scanning leads to repeated neural excitation while optical access through the rodent eye in vivo has proven highly challenging. Here, we present two enabling optical designs for multiphoton imaging of responses to visual stimuli in mouse retinas expressing calcium indicators. First, we present an imaging solution based on Scanning Line Temporal Focusing (SLITE) for rapidly imaging neuronal activity in vitro. In this design, we scan a temporally focused line rather than a point, increasing the scan speed and reducing the impact of repeated excitation, while maintaining high optical sectioning. Second, we present the first in vivo demonstration of two-photon imaging of RGC activity in the mouse retina. To obtain these cellular resolution recordings we integrated an illumination path into a correction-free imaging system designed using an optical model of the mouse eye. This system can image at multiple depths using an electronically tunable lens integrated into its optical path. The new optical designs presented here overcome a number of outstanding obstacles, allowing the study of rapid calcium- and potentially even voltage-indicator signals both in vitro and in vivo, thereby bringing us a step closer toward distributed monitoring of action potentials.

  20. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
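
    A minimal sketch of the polynomial-surrogate idea follows: fit a global quadratic response surface to a handful of sampled design points and read off its optimum. The one-variable "solver", the noise level, and the variable names are made-up stand-ins for real CFD or experimental data.

    ```python
    import numpy as np

    # Hypothetical one-variable design study: a few noisy evaluations of an expensive solver
    rng = np.random.default_rng(0)
    design_points = np.linspace(0.0, 1.0, 7)                      # sampled design variable
    true_response = lambda d: 3.0 * (d - 0.62) ** 2 + 1.0         # stand-in for a CFD output
    observations = true_response(design_points) + rng.normal(0.0, 0.02, design_points.size)

    # Global quadratic response surface fitted by least squares over all design points
    c2, c1, c0 = np.polyfit(design_points, observations, deg=2)
    surrogate_optimum = -c1 / (2.0 * c2)                          # analytic minimizer of the surrogate
    print(f"surrogate predicts the best design near d = {surrogate_optimum:.3f}")
    ```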

  1. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure of feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with a much more complicated digital aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  2. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design Using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.

    2015-01-01

    Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multiobjective hybrid optimal control problem. The method is demonstrated on several real-world problems.

  3. Detecting false positives in multielement designs: implications for brief assessments.

    PubMed

    Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L

    2011-11-01

    The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.

  4. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the development of the "Internet +" trend, further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally with MeVisLab and packaged as X3D (Extensible 3D Graphics) files. Then, a B/S (Browser/Server) system for 3D images is designed using HTML5 and a WebGL rendering engine library, and the X3D image files are parsed and rendered by the system. The results of this study showed that the platform was suitable for multiple operating systems, realizing cross-platform and mobile access to medical image data. Future directions for the development of medical imaging platforms are also pointed out; web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  5. GLRS-R 2-colour retroreflector target design and predicted performance

    NASA Technical Reports Server (NTRS)

    Lund, Glenn

    1993-01-01

    This paper reports on the retroreflector ground-target design for the GLRS-R spaceborne dual-wavelength laser ranging system. The described passive design flows down from the requirements of high station autonomy, high global FOV (up to 60 degrees zenith angle), little or no multiple pulse returns, and adequate optical cross section for most ranging geometries. The proposed solution makes use of 5 hollow cube-corner retroreflectors of which one points to the zenith and the remaining four are inclined from the vertical at uniform azimuthal spacings. The need for fairly large (approximately 10 cm) retroreflectors is expected (within turbulence limitations) to generate quite narrow diffraction lobes, thus placing non-trivial requirements on the vectorial accuracy of velocity aberration corrections. A good compromise solution is found by appropriately spoiling just one of the retroreflector dihedral angles from 90 degrees, thus generating two symmetrically oriented diffraction lobes in the return beam. The required spoil angles are found to have little dependence on ground target latitude. Various link budget analyses are presented, showing the influence of such factors as point-ahead optimization, turbulence, ranging angle, atmospheric visibility and ground target thermal deformations.

  6. GLRS-R 2-colour retroreflector target design and predicted performance

    NASA Astrophysics Data System (ADS)

    Lund, Glenn

    1993-06-01

    This paper reports on the retroreflector ground-target design for the GLRS-R spaceborne dual-wavelength laser ranging system. The described passive design flows down from the requirements of high station autonomy, high global FOV (up to 60 degrees zenith angle), little or no multiple pulse returns, and adequate optical cross section for most ranging geometries. The proposed solution makes use of 5 hollow cube-corner retroreflectors of which one points to the zenith and the remaining four are inclined from the vertical at uniform azimuthal spacings. The need for fairly large (approximately 10 cm) retroreflectors is expected (within turbulence limitations) to generate quite narrow diffraction lobes, thus placing non-trivial requirements on the vectorial accuracy of velocity aberration corrections. A good compromise solution is found by appropriately spoiling just one of the retroreflector dihedral angles from 90 degrees, thus generating two symmetrically oriented diffraction lobes in the return beam. The required spoil angles are found to have little dependence on ground target latitude. Various link budget analyses are presented, showing the influence of such factors as point-ahead optimization, turbulence, ranging angle, atmospheric visibility and ground target thermal deformations.

  7. Corrosion behaviour of steel rebars embedded in a concrete designed for the construction of an intermediate-level radioactive waste disposal facility

    NASA Astrophysics Data System (ADS)

    Duffó, G. S.; Arva, E. A.; Schulz, F. M.; Vazquez, D. R.

    2013-07-01

    The National Atomic Energy Commission of the Argentine Republic is developing a nuclear waste disposal management programme that contemplates the design and construction of a facility for the final disposal of intermediate-level radioactive wastes. The repository is based on the use of multiple, independent and redundant barriers. The major components are made of reinforced concrete, so the durability of these structures is an important aspect of the facility's integrity. This work presents an investigation performed on an instrumented reinforced concrete prototype specifically designed for this purpose, to study the behaviour of an intermediate-level radioactive waste disposal facility from the rebar corrosion point of view. The information obtained will be used for the final design of the facility in order to guarantee a service life greater than or equal to the durability foreseen for this type of facility.

  8. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
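
    The contrast between randomly and systematically distributed sample points can be illustrated on a simple two-variable integral. The midpoint lattice below is only a simple stand-in for Conroy's closed, symmetric point pattern, not his actual construction; the integrand and sample count are arbitrary.

    ```python
    import math
    import numpy as np

    f = lambda x, y: np.exp(-(x**2 + y**2))                     # integrand on [0,1] x [0,1]
    exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** 2     # closed-form reference value

    n = 4096
    rng = np.random.default_rng(0)

    # (a) Monte Carlo: sample points placed randomly over the integration region
    xr, yr = rng.random(n), rng.random(n)
    mc_estimate = f(xr, yr).mean()

    # (b) Systematic points: a midpoint lattice that fills the region evenly
    side = int(math.sqrt(n))
    g = (np.arange(side) + 0.5) / side
    xs, ys = np.meshgrid(g, g)
    lattice_estimate = f(xs, ys).mean()

    print(f"exact        {exact:.6f}")
    print(f"Monte Carlo  {mc_estimate:.6f}  (error {abs(mc_estimate - exact):.2e})")
    print(f"systematic   {lattice_estimate:.6f}  (error {abs(lattice_estimate - exact):.2e})")
    ```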

  9. Does hippotherapy effect use of sensory information for balance in people with multiple sclerosis?

    PubMed

    Lindroth, Jodi L; Sullivan, Jessica L; Silkwood-Sherer, Debbie

    2015-01-01

    This case-series study aimed to determine if there were observable changes in sensory processing for postural control in individuals with multiple sclerosis (MS) following physical therapy using hippotherapy (HPOT), or changes in balance and functional gait. This pre-test non-randomized design study, with follow-up assessment at 6 weeks, included two females and one male (age range 37-60 years) with diagnoses of relapse-remitting or progressive MS. The intervention consisted of twelve 40-min physical therapy sessions which included HPOT twice a week for 6 weeks. Sensory organization and balance were assessed by the Sensory Organization Test (SOT) and Berg Balance Scale (BBS). Gait was assessed using the Functional Gait Assessment (FGA). Following the intervention period, all three participants showed improvements in SOT (range 1-8 points), BBS (range 2-6 points), and FGA (average 4 points) scores. These improvements were maintained or continued to improve at follow-up assessment. Two of the three participants no longer over-relied on vision and/or somatosensory information as the primary sensory input for postural control, suggesting improved use of sensory information for balance. The results indicate that HPOT may be a beneficial physical therapy treatment strategy to improve balance, functional gait, and enhance how some individuals with MS process sensory cues for postural control. Randomized clinical trials will be necessary to validate results of this study.

  10. The Born approximation, multiple scattering, and the butterfly algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alejandro F.

    Radar works by focusing a beam of light and seeing how long it takes to reflect. To see a large region the beam is pointed in different directions. The focus of the beam depends on the size of the antenna (called an aperture). Synthetic aperture radar (SAR) works by moving the antenna through some region of space. A fundamental assumption in SAR is that waves only bounce once. Several imaging algorithms have been designed using that assumption. The scattering process can be described by iterations of a badly behaving integral. Recently a method for efficiently evaluating these types of integrals has been developed. We will give a detailed implementation of this algorithm and apply it to study the multiple scattering effects in SAR using target estimates from single scattering algorithms.

  11. Gain-scheduling multivariable LPV control of an irrigation canal system.

    PubMed

    Bolea, Yolanda; Puig, Vicenç

    2016-07-01

    The purpose of this paper is to present a multivariable linear parameter varying (LPV) controller with a gain scheduling Smith Predictor (SP) scheme applicable to open-flow canal systems. This LPV controller based on SP is designed taking into account the uncertainty in the estimation of delay and the variation of plant parameters according to the operating point. This new methodology can be applied to a class of delay systems that can be represented by a set of models that can be factorized into a rational multivariable model in series with left/right diagonal (multiple) delays, such as, the case of irrigation canals. A multiple pool canal system is used to test and validate the proposed control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Multi-EMR Structured Data Entry Form: User-Acceptance Testing of a Prototype.

    PubMed

    Zavar, Abbas; Keshavjee, Karim

    2017-01-01

    Capturing standardized data from multiple EMRs at the point of care is highly desirable for a variety of uses, including quality improvement programs, multi-centered clinical trials and clinical decision support. In this paper, we describe the design, development and user acceptance testing of a prototype web-based form (the Form) that can integrate with multiple EMRs. We used the validated UTAUT questionnaire to assess the likelihood of uptake of the Form into clinical practice. The Form was found to be easy to use, elicits low anxiety, supports productivity and is perceived to have good support. Users would benefit from training and from better social signaling about the importance of using the Form in their practice. Making the Form more fun and interesting could help increase uptake.

  13. A Seat Around the Table: Participatory Data Analysis With People Living With Dementia.

    PubMed

    Clarke, Charlotte L; Wilkinson, Heather; Watson, Julie; Wilcockson, Jane; Kinnaird, Lindsay; Williamson, Toby

    2018-05-01

    The involvement of "people with experience" in research has developed considerably in the last decade. However, involvement as co-analysts at the point of data analysis and synthesis has received very little attention-in particular, there is very little work that involves people living with dementia as co-analysts. In this qualitative secondary data analysis project, we (a) analyzed data through two theoretical lenses: Douglas's cultural theory of risk and Tronto's Ethic of Care, and (b) analyzed data in workshops with people living with dementia. The design involved cycles of presenting, interpreting, representing and reinterpreting the data, and findings between multiple stakeholders. We explore ways of involving people with experience as co-analysts and explore the role of reflexivity, multiple voicing, literary styling, and performance in participatory data analysis.

  14. Context-dependent logo matching and recognition.

    PubMed

    Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto

    2013-03-01

    We contribute, through this paper, to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.

  15. N7 logic via patterning using templated DSA: implementation aspects

    NASA Astrophysics Data System (ADS)

    Bekaert, J.; Doise, J.; Gronheid, R.; Ryckaert, J.; Vandenberghe, G.; Fenger, G.; Her, Y. J.; Cao, Y.

    2015-07-01

    In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA for IC fabrication is seriously considered for the 7 nm node. At this node the DSA technology could alleviate costs for multiple patterning and limit the number of masks that would be required per layer. At imec, multiple approaches for inserting DSA into the 7 nm node are considered. One of the most straightforward approaches for implementation would be for via patterning through templated DSA; a grapho-epitaxy flow using cylindrical phase BCP material resulting in contact hole multiplication within a litho-defined pre-pattern. To be implemented for 7 nm node via patterning, not only the appropriate process flow needs to be available, but also DSA-aware mask decomposition is required. In this paper, several aspects of the imec approach for implementing templated DSA will be discussed, including experimental demonstration of density effect mitigation, DSA hole pattern transfer and double DSA patterning, creation of a compact DSA model. Using an actual 7 nm node logic layout, we derive DSA-friendly design rules in a logical way from a lithographer's view point. A concrete assessment is provided on how DSA-friendly design could potentially reduce the number of Via masks for a place-and-routed N7 logic pattern.

  16. Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.

    PubMed

    Steel, Ruth Irene

    2015-01-01

    Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
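
    The weighting step of the MTP method can be sketched in a few lines; the path lengths and usage proportions below are hypothetical observations for a single travel bout, not data from the study.

    ```python
    # Hypothetical observation of one travel bout: each path's length (m) and the
    # proportion of group members that used it (proportions sum to 1)
    paths = [
        {"length_m": 420.0, "proportion": 0.50},
        {"length_m": 510.0, "proportion": 0.30},
        {"length_m": 380.0, "proportion": 0.20},
    ]

    # MTP-style estimate: weight each path by its proportional use, then sum
    mtp_distance = sum(p["length_m"] * p["proportion"] for p in paths)
    print(f"weighted travel-distance contribution for this bout: {mtp_distance:.1f} m")
    ```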

  17. Multiple excitation nano-spot generation and confocal detection for far-field microscopy.

    PubMed

    Mondal, Partha Pratim

    2010-03-01

    An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.

  18. Multiple excitation nano-spot generation and confocal detection for far-field microscopy

    NASA Astrophysics Data System (ADS)

    Mondal, Partha Pratim

    2010-03-01

    An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.

  19. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  20. MAGENCO: A map generalization controller for Arc/Info

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.; Cashwell, J.W.

    The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, then apply various tolerances to the sample. The results are shown in multiple display windows that are arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. They analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
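
    For reference, a compact sketch of the Douglas-Peucker simplification that GENERALIZE implements is shown below; the sample polyline and tolerance are arbitrary, and this is a generic implementation rather than Arc/Info's.

    ```python
    import math

    def perpendicular_distance(pt, start, end):
        """Perpendicular distance from pt to the line through start and end."""
        (x, y), (x1, y1), (x2, y2) = pt, start, end
        dx, dy = x2 - x1, y2 - y1
        if dx == dy == 0:
            return math.hypot(x - x1, y - y1)
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

    def douglas_peucker(points, tolerance):
        """Keep the farthest interior point and recurse while it exceeds the tolerance."""
        if len(points) < 3:
            return list(points)
        dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
        i_max = max(range(len(dists)), key=dists.__getitem__) + 1
        if dists[i_max - 1] <= tolerance:
            return [points[0], points[-1]]      # the whole span collapses to its endpoints
        left = douglas_peucker(points[: i_max + 1], tolerance)
        right = douglas_peucker(points[i_max:], tolerance)
        return left[:-1] + right                # avoid duplicating the split point

    line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
    print(douglas_peucker(line, tolerance=1.0))
    ```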

  1. Equilibrium points and associated periodic orbits in the gravity of binary asteroid systems: (66391) 1999 KW4 as an example

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Wang, Yue; Xu, Shijie

    2018-04-01

    The motion of a massless particle in the gravity of a binary asteroid system, referred to as the restricted full three-body problem (RF3BP), is fundamental, not only for the evolution of the binary system, but also for the design of relevant space missions. In this paper, equilibrium points and associated periodic orbit families in the gravity of a binary system are investigated, with the binary (66391) 1999 KW4 as an example. The polyhedron shape model is used to describe irregular shapes and corresponding gravity fields of the primary and secondary of (66391) 1999 KW4, which is more accurate than the ellipsoid shape model in previous studies and provides a high-fidelity representation of the gravitational environment. Both the synchronous and non-synchronous states of the binary system are considered. For the synchronous binary system, the equilibrium points and their stability are determined, and periodic orbit families emanating from each equilibrium point are generated by using the shooting (multiple shooting) method and the homotopy method, where the homotopy function connects the circular restricted three-body problem and RF3BP. In the non-synchronous binary system, trajectories of equivalent equilibrium points are calculated, and the associated periodic orbits are obtained by using the homotopy method, where the homotopy function connects the synchronous and non-synchronous systems. Although only the binary (66391) 1999 KW4 is considered, our methods are also applicable to other binary systems with polyhedron shape data. Our results on equilibrium points and associated periodic orbits provide general insights into the dynamical environment and orbital behaviors in the proximity of small binary asteroids and enable the trajectory design and mission operations in future binary system explorations.
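
    As a hedged illustration of locating an equilibrium point in the classical circular restricted three-body problem (the simpler end of the homotopy described above), the sketch below finds the collinear L1 point for a point-mass system. The mass parameter is an approximate Earth-Moon value; this is not the paper's polyhedron-gravity RF3BP computation.

    ```python
    from scipy.optimize import brentq

    mu = 0.01215                     # Earth-Moon mass parameter (approximate)

    def axis_acceleration(x):
        """Net x-acceleration in the rotating frame for a point on the line joining
        the primaries (located at -mu and 1-mu in normalized CR3BP units)."""
        r1, r2 = abs(x + mu), abs(x - 1 + mu)
        return x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3

    # L1 lies between the primaries; bracket the sign change and solve for the root
    x_L1 = brentq(axis_acceleration, 0.5, 1 - mu - 1e-4)
    print(f"L1 (normalized units): x = {x_L1:.6f}")
    ```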

  2. A Multiple-Sequence Variant of the Multiple-Baseline Design: A Strategy for Analysis of Sequence Effects and Treatment Comparison.

    ERIC Educational Resources Information Center

    Noell, George H.; Gresham, Frank M.

    2001-01-01

    Describes design logic and potential uses of a variant of the multiple-baseline design. The multiple-baseline multiple-sequence (MBL-MS) consists of multiple-baseline designs that are interlaced with one another and include all possible sequences of treatments. The MBL-MS design appears to be primarily useful for comparison of treatments taking…

  3. Electronic Imaging: Rochester Imaging Consortium, Abstracts of Research Topics Reported at the Annual Meeting of the Optical Society of America Held in San Jose, California on 3-8 November 1991

    DTIC Science & Technology

    1991-11-01

    Abstracts listed include "Image Deblurring for Multiple-Point Impulse Responses" by Bryan J. Stossel and Nicholas George, and "Backscatter from a Tilted Rough Disc" by Donald J. Schertler and Nicholas George.

  4. SU-E-T-167: Characterization of In-House Plastic Scintillator Detectors Array for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Liu, H; Dimofte, A

    Purpose: To characterize the basic performance of a plastic scintillator detector (PSD) array designed for dosimetry in radiation therapy. Methods: An in-house PSD array has been developed by placing single-point PSDs into a customized 2D holder. Each point PSD is a plastic scintillating fiber-based detector designed for highly accurate measurement of small radiotherapy fields used in patient plan verification and machine commissioning and QA procedures. A parallel fiber without PSD is used for Cerenkov separation by subtracting from PSD readings. Cerenkov separation was confirmed by optical spectroscopy. Alternative Cerenkov separation approaches are also investigated. The optical signal was converted to an electronic signal with a photodiode and then subsequently amplified. We measured its dosimetry performance, including percentage depth dose and output factor, and compared with reference ion chamber measurements. The PSD array is then placed along the radiation beam for multiple point dose measurement, representing subsets of PDD measurements, or perpendicular to the beam for profile measurements. Results: The dosimetry results of PSD point measurements agree well with reference ion chamber measurements. For percentage depth dose, the maximal differences between PSD and ion chamber results are 3.5% and 2.7% for 6MV and 15MV beams, respectively. For the output factors, PSD measurements are within 3% of ion chamber results. PDD and profile measurements with the PSD array are also performed. Conclusions: The current design of the multichannel PSD array is feasible for dosimetry measurement in radiation therapy. Dose distribution along or perpendicular to the beam path could be measured. It may also be used for range verification in proton therapy. A PS hollow fiber detector will be investigated to eliminate the Cerenkov radiation effect so that all 32 channels can be used.

  5. Illusion induced overlapped optics.

    PubMed

    Zang, XiaoFei; Shi, Cheng; Li, Zhou; Chen, Lin; Cai, Bin; Zhu, YiMing; Zhu, HaiBin

    2014-01-13

    The traditional transformation-based cloak seems like it can only hide objects by bending the incident electromagnetic waves around the hidden region. In this paper, we prove that invisible cloaks can be applied to realize overlapped optics. No matter how many in-phase point sources are located in the hidden region, all of them can overlap each other (this can be considered an illusion effect), leading to the perfect optical interference effect. In addition, a singular parameter-independent cloak is also designed to obtain quasi-overlapped optics. An even more striking feature of overlapped optics is that if N identical, separated, in-phase point sources are covered with the illusion media, the total power outside the transformation region is N^2*I0 (not N*I0), where I0 is the power of a single point source and N is the number of point sources, which seems to violate the law of conservation of energy. A theoretical model based on the interference effect is proposed to interpret the total power of these two kinds of overlapped optics effects. Our investigation may have wide applications in high-power coherent laser beams, multiple laser diodes, and so on.
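
    The N^2 scaling follows from adding the complex amplitudes of the in-phase sources before squaring. A numerical sanity check with an arbitrary single-source amplitude is sketched below; it illustrates the interference argument only, not the cloak design itself.

    ```python
    # N identical in-phase point sources made to overlap: complex amplitudes add first,
    # then the power is the squared magnitude of the sum
    N, a0 = 4, 1.0                                   # number of sources, single-source amplitude
    I0 = abs(a0) ** 2                                # single-source power

    coherent_power = abs(N * a0) ** 2                # overlapped in-phase sources: N^2 * I0
    incoherent_power = N * I0                        # separated/incoherent sources: N * I0
    print(coherent_power, N**2 * I0, incoherent_power)
    ```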

  6. The QKD network: model and routing scheme

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as the distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust relaying QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trust relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust relaying QKD network for easily analysing and understanding this network, and proposes a dynamical routing scheme for this network. From the viewpoint of designing a dynamical routing scheme in classical networks, the proposed scheme consists of three components: a Hello protocol that helps share the network topology information, a routing algorithm to select a set of suitable paths and establish the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
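
    As a hedged sketch of the path-selection component, a plain Dijkstra search over a relay graph is shown below; the topology and the interpretation of link costs as inverse key-generation rates are illustrative assumptions, not the paper's routing algorithm.

    ```python
    import heapq

    def shortest_path(graph, src, dst):
        """Plain Dijkstra over link costs; in a trusted-relay QKD network the cost could
        encode residual key material or link load rather than geographic distance."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[dst]

    # Hypothetical relay topology: costs are inverse key-generation rates (lower is better)
    topology = {"A": {"R1": 1.0, "R2": 2.5}, "R1": {"R3": 1.2}, "R2": {"R3": 0.8},
                "R3": {"B": 1.0}}
    print(shortest_path(topology, "A", "B"))
    ```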

  7. Access to Mars from Earth-Moon Libration Point Orbits: Manifold and Direct Options

    NASA Technical Reports Server (NTRS)

    Kakoi, Masaki; Howell, Kathleen C.; Folta, David

    2014-01-01

    This investigation is focused specifically on transfers from Earth-Moon L(sub 1)/L(sub 2) libration point orbits to Mars. Initially, the analysis is based in the circular restricted three-body problem to utilize the framework of the invariant manifolds. Various departure scenarios are compared, including arcs that leverage manifolds associated with the Sun-Earth L(sub 2) orbits as well as non-manifold trajectories. For the manifold options, ballistic transfers from Earth-Moon L(sub 2) libration point orbits to Sun-Earth L(sub 1)/L(sub 2) halo orbits are first computed. This autonomous procedure applies to both departure and arrival between the Earth-Moon and Sun-Earth systems. Departure times in the lunar cycle, amplitudes and types of libration point orbits, manifold selection, and the orientation/location of the surface of section all contribute to produce a variety of options. As the destination planet, the ephemeris position for Mars is employed throughout the analysis. The complete transfer is transitioned to the ephemeris model after the initial design phase. Results for multiple departure/arrival scenarios are compared.

  8. Optical Design Trade Study for the Wide Field Infrared Survey Telescope [WFIRST]

    NASA Technical Reports Server (NTRS)

    Content, David A.; Goullioud, R.; Lehan, John P.; Mentzell, John E.

    2011-01-01

    The Wide Field Infrared Survey Telescope (WFIRST) mission concept was ranked first among new space astrophysics missions by the Astro2010 Decadal Survey, incorporating the Joint Dark Energy Mission (JDEM)-Omega payload concept and multiple science white papers. The mission is based on a space telescope at L2 studying exoplanets (via gravitational microlensing), probing dark energy, and surveying the near-infrared sky. Since the release of NWNH, the WFIRST project has been working with the WFIRST science definition team (SDT) to refine mission and payload concepts. We present the driving requirements. The current interim reference mission point design, based on a 1.3 m unobscured-aperture three-mirror anastigmat form with focal imaging and slitless spectroscopy science channels, is consistent with the requirements, requires no technology development, and outperforms the JDEM-Omega design.

  9. Assisting People with Multiple Disabilities and Minimal Motor Behavior to Improve Computer Pointing Efficiency through a Mouse Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chang, Man-Ling; Shih, Ching-Tien

    2009-01-01

    This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their pointing performance using finger poke ability with a mouse wheel through a Dynamic Pointing Assistive Program (DPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, changes a…

  10. Assisting People with Multiple Disabilities Improve Their Computer-Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chiu, Sheng-Kai; Chu, Chiung-Ling; Shih, Ching-Tien; Liao, Yung-Kun; Lin, Chia-Chen

    2010-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance using hand swing with a standard mouse through an Extended Dynamic Pointing Assistive Program (EDPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and changes a mouse into a precise…

  11. Dynamic Gate Product and Artifact Generation from System Models

    NASA Technical Reports Server (NTRS)

    Jackson, Maddalena; Delp, Christopher; Bindschadler, Duane; Sarrel, Marc; Wollaeger, Ryan; Lam, Doris

    2011-01-01

    Model Based Systems Engineering (MBSE) is gaining acceptance as a way to formalize systems engineering practice through the use of models. The traditional method of producing and managing a plethora of disjointed documents and presentations ("Power-Point Engineering") has proven both costly and limiting as a means to manage the complex and sophisticated specifications of modern space systems. We have developed a tool and method to produce sophisticated artifacts as views and by-products of integrated models, allowing us to minimize the practice of "Power-Point Engineering" on model-based projects and demonstrate the ability of MBSE to work within and supersede traditional engineering practices. This paper describes how we have created and successfully used model-based document generation techniques to extract paper artifacts from complex SysML and UML models in support of successful project reviews. Use of formal SysML and UML models for architecture and system design enables production of review documents, textual artifacts, and analyses that are consistent with one another and require virtually no labor-intensive maintenance across small-scale design changes and multiple authors. This effort thus enables approaches that focus more on rigorous engineering work and less on "PowerPoint engineering" and production of paper-based documents or their "office-productivity" file equivalents.

  12. Bayesian design criteria: computation, comparison, and application to a pharmacokinetic and a pharmacodynamic model.

    PubMed

    Merlé, Y; Mentré, F

    1995-02-01

    In this paper, three criteria for designing experiments for Bayesian estimation of the parameters of models that are nonlinear with respect to their parameters, when a prior distribution is available, are presented: the determinant of the Bayesian information matrix, the determinant of the pre-posterior covariance matrix, and the expected information provided by an experiment. A procedure to simplify the computation of these criteria is proposed in the case of continuous prior distributions and is compared with the criterion obtained from a linearization of the model about the mean of the prior distribution for the parameters. This procedure is applied to two models commonly encountered in the area of pharmacokinetics and pharmacodynamics: the one-compartment open model with bolus intravenous single-dose injection and the Emax model. They both involve two parameters. Additive as well as multiplicative Gaussian measurement errors are considered with normal prior distributions. Various combinations of the variances of the prior distribution and of the measurement error are studied. Our attention is restricted to designs with limited numbers of measurements (1 or 2 measurements). This situation often occurs in practice when Bayesian estimation is performed. The optimal Bayesian designs that result vary with the variances of the parameter distribution and with the measurement error. The two-point optimal designs sometimes differ from the D-optimal designs for the mean of the prior distribution and may consist of replicating measurements. For the studied cases, the determinant of the Bayesian information matrix and its linearized form lead to the same optimal designs. In some cases, the pre-posterior covariance matrix can be far from its lower bound, namely, the inverse of the Bayesian information matrix, especially for the Emax model and a multiplicative measurement error. The expected information provided by the experiment and the determinant of the pre-posterior covariance matrix generally lead to the same designs except for the Emax model and the multiplicative measurement error. Results show that these criteria can be easily computed and that they could be incorporated in modules for designing experiments.
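
    As a concrete illustration of the first criterion, the sketch below evaluates the determinant of a linearized Bayesian information matrix (expected Fisher information at the prior mean plus the prior precision) for the one-compartment bolus model and searches a coarse time grid for the best two-point design. The dose, prior moments, and error variance are invented for the example, and the linearization is only one of the computational routes considered in the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

# One-compartment bolus model C(t) = (D/V) * exp(-k*t); parameters theta = (V, k).
D = 100.0                              # dose (assumed)
theta_mean = np.array([10.0, 0.2])     # prior means for V (L) and k (1/h), assumed
omega = np.diag([4.0, 0.01])           # prior covariance of (V, k), assumed
sigma2 = 0.25                          # additive measurement-error variance, assumed

def grad_c(t, theta):
    """Gradient of C(t) with respect to (V, k), evaluated at the prior mean."""
    V, k = theta
    e = np.exp(-k * t)
    return np.array([-D / V**2 * e, -D / V * t * e])

def linearized_bayes_det(times):
    """det of the linearized Bayesian information matrix for the sampling times."""
    M = np.linalg.inv(omega)                       # prior precision
    for t in times:
        g = grad_c(t, theta_mean)
        M = M + np.outer(g, g) / sigma2            # expected Fisher information
    return np.linalg.det(M)

# Brute-force search over candidate two-point designs on a coarse time grid.
grid = np.arange(0.25, 24.25, 0.25)
best = max(combinations_with_replacement(grid, 2), key=linearized_bayes_det)
print(best, linearized_bayes_det(best))
```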

  13. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions.

    PubMed

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-09-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.

  14. Robust design of multiple trailing edge flaps for helicopter vibration reduction: A multi-objective bat algorithm approach

    NASA Astrophysics Data System (ADS)

    Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.

    2015-09-01

    The objective of this study is to determine an optimal trailing edge flap configuration and flap location to achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with 3-level design describes both the objectives adequately. Two new orthogonal arrays called MGB2P-OA and MGB4P-OA are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm that is based on the echolocation behaviour of bats. It is found that MOBA inspired Pareto optimal trailing edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
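
    The multi-objective step ultimately amounts to keeping the non-dominated (Pareto) designs for the two conflicting objectives. The filter below shows only that idea; it is not the bat algorithm itself, and the candidate designs and numbers are illustrative (loosely echoing the reported 73% vibration and 27% actuation power reductions).

```python
def pareto_front(designs):
    """Return the non-dominated designs for two minimization objectives
    (here: normalized hub vibration and normalized flap actuation power).

    designs: list of (name, vibration, power)
    """
    front = []
    for name, vib, pwr in designs:
        dominated = any(
            (v2 <= vib and p2 <= pwr) and (v2 < vib or p2 < pwr)
            for _, v2, p2 in designs
        )
        if not dominated:
            front.append((name, vib, pwr))
    return front

candidates = [("baseline", 1.00, 1.00), ("A", 0.27, 0.73),
              ("B", 0.40, 0.60), ("C", 0.50, 0.95)]
print(pareto_front(candidates))   # [('A', 0.27, 0.73), ('B', 0.4, 0.6)]
```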

  15. Watershed-based survey designs

    USGS Publications Warehouse

    Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.

    2005-01-01

    Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
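
    One probabilistic ingredient mentioned above, unequal probability weighting, can be illustrated with a size-weighted draw of watershed polygons. The sketch below is a simplified stand-in, not a full GRTS or stratified design; the watershed names and stream lengths are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical subcatchments with total stream length (km) used as a size measure.
watersheds = ["W1", "W2", "W3", "W4", "W5", "W6"]
stream_km = np.array([12.0, 3.5, 8.0, 25.0, 5.5, 16.0])

# Unequal-probability (size-weighted) selection of n monitoring sites without
# replacement: larger drainage networks are more likely to enter the sample.
n = 3
p = stream_km / stream_km.sum()
sample = rng.choice(watersheds, size=n, replace=False, p=p)
print(sample)
```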

  16. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions

    PubMed Central

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J. Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-01-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566

  17. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Low-Thrust Mission Design

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Ghosh, Alexander R.

    2015-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on a hypothetical mission to the main asteroid belt.

  18. Program For Engineering Electrical Connections

    NASA Technical Reports Server (NTRS)

    Billitti, Joseph W.

    1990-01-01

    DFACS is an interactive multiuser computer-aided-engineering software tool for system-level electrical integration and cabling engineering. Purpose of program is to provide engineering community with centralized data base for putting in and gaining access to data on functional definition of system, details of end-circuit pinouts in systems and subsystems, and data on wiring harnesses. Objective is to provide instantaneous single point of interchange of information, thus avoiding error-prone, time-consuming, and costly shuttling of data along multiple paths. Designed to operate on DEC VAX mini or micro computer using Version 5.0/03 of INGRES.

  19. The tip/tilt tracking sensor based on multi-anode photo-multiplier tube

    NASA Astrophysics Data System (ADS)

    Ma, Xiao-yu; Rao, Chang-hui; Tian, Yu; Wei, Kai

    2013-09-01

    Based on the demands for high sensitivity, precision and frame rate of tip/tilt tracking sensors in acquisition, tracking and pointing (ATP) systems for satellite-ground optical communications, this paper proposes to employ multiple-anode photo-multiplier tubes (MAPMTs) in tip/tilt tracking sensors. Meanwhile, an array-type photon-counting system was designed to meet the requirements of the tip/tilt tracking sensors. The experimental results show that tip/tilt tracking sensors based on MAPMTs can achieve photon-counting sensitivity and a high frame rate as well as low noise.

  20. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach

    PubMed Central

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-01-01

    Objectives The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Setting Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. Participants We included 62 interviews from 44 patients and 18 non-clinical caregivers. Intervention Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. Outcome measures This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. Results We identified 5 broad themes that capture the patients’ experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients’ experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Conclusions Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and modification. PMID:26351182

  1. Intra- and inter-brain synchronization during musical improvisation on the guitar.

    PubMed

    Müller, Viktor; Sänger, Johanna; Lindenberger, Ulman

    2013-01-01

    Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action.
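
    As a toy illustration of the kind of pairwise coupling measure underlying such hyper-brain network analyses, the sketch below computes a phase-locking value between two signals. It is a generic example, not the authors' pipeline; the sampling rate, frequencies, and signals are fabricated.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two signals (e.g. one electrode from each
    guitarist): 1 means perfectly phase-locked, values near 0 mean unrelated."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(abs(np.exp(1j * dphi).mean()))

t = np.arange(0, 4, 1 / 250)                      # 4 s sampled at 250 Hz
theta_a = np.sin(2 * np.pi * 6 * t)               # 6 Hz rhythm, "brain A"
theta_b = np.sin(2 * np.pi * 6 * t + 0.8)         # same rhythm, fixed phase lag
noise = np.random.default_rng(0).standard_normal(t.size)
print(phase_locking_value(theta_a, theta_b))      # close to 1
print(phase_locking_value(theta_a, noise))        # much lower
```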

  2. Intra- and Inter-Brain Synchronization during Musical Improvisation on the Guitar

    PubMed Central

    Müller, Viktor; Sänger, Johanna; Lindenberger, Ulman

    2013-01-01

    Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action. PMID:24040094

  3. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    PubMed Central

    Churkin, Alexander; Barash, Danny

    2008-01-01

    Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, that are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1].
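
    The combinatorial blow-up that motivates the RNAsubopt-based prefiltering is easy to quantify: an m-point mutant set of a length-n sequence over a four-letter alphabet has C(n, m)·3^m members, each of which would otherwise have to be folded. A small sketch follows; the counting formula is a standard assumption, not taken from the paper.

```python
from math import comb

def num_multi_point_mutants(n, m, alphabet=4):
    """Number of distinct m-point mutants of a length-n sequence:
    choose m positions, then (alphabet - 1) alternative bases at each."""
    return comb(n, m) * (alphabet - 1) ** m

for m in (1, 2, 3):
    print(m, num_multi_point_mutants(100, m))
# 1 -> 300; 2 -> 44,550; 3 -> 4,365,900 structures to fold without prefiltering
```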

  4. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    NASA Astrophysics Data System (ADS)

    dall'Acqua, Luisa

    2011-08-01

    The teleology of our research is to propose a solution to the request of "innovative, creative teaching", proposing a methodology to educate creative Students in a society characterized by multiple reference points and hyper dynamic knowledge, continuously subject to reviews and discussions. We apply a multi-prospective Instructional Design Model (PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach, consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach, applying the theoretical design principles of the above mentioned ID Model, describing methods, techniques, technologies and assessment criteria for the definition of lesson modes in an e-course.

  5. Co-designing for quality: Creating a user-driven tool to improve quality in youth mental health services.

    PubMed

    Hackett, Christina L; Mulvale, Gillian; Miatello, Ashleigh

    2018-04-29

    Although high quality mental health care for children and youth is a goal of many health systems, little is known about the dimensions of quality mental health care from users' perspectives. We engaged young people, caregivers and service providers to share experiences, which shed light on quality dimensions for youth mental health care. Using experience-based co-design, we collected qualitative data from young people aged 16-24 with a mental disorder (n = 19), identified caregivers (n = 12) and service providers (n = 14) about their experiences with respect to youth mental health services. Experience data were collected using multiple approaches including interviews, a suite of online and smartphone applications (n = 22), and a co-design event (n = 16) and analysed to extract touch points. These touch points were used to prioritize and co-design a user-driven prototype of a questionnaire to provide feedback to service providers. Young people, caregiver and service provider reports of service experiences were used to identify aspects of care quality at eight mental health service contact points: Access to mental health care; Transfer to/from hospital; Intake into hospital; Services provided; Assessment and treatment; Treatment environment; and Caregiver involvement in care. In some cases, low quality care was harmful to users and their caregivers. Young people co-designed a prototype of a user-driven feedback questionnaire to improve quality of service experiences that was supported by service providers and caregivers at the co-design event. By using EBCD to capture in-depth data regarding experiences of young people, their caregivers and service providers, study participants have begun to establish a baseline for acceptable quality of mental health care for young people. © 2018 The Authors. Health Expectations published by John Wiley & Sons Ltd.

  6. Optimizing the wireless power transfer over MIMO Channels

    NASA Astrophysics Data System (ADS)

    Wiedmann, Karsten; Weber, Tobias

    2017-09-01

    In this paper, the optimization of the power transfer over wireless channels having multiple inputs and multiple outputs (MIMO) is studied. To this end, the transmitter, the receiver and the MIMO channel are modeled as multiports. The power transfer efficiency is described by a Rayleigh quotient, which is a function of the channel's scattering parameters and the incident waves from both the transmitter and receiver sides. This way, the power transfer efficiency can be maximized analytically by solving a generalized eigenvalue problem, which is deduced from the Rayleigh quotient. As a result, the maximum power transfer efficiency achievable over a given MIMO channel is obtained. This maximum can be used as a performance bound in order to benchmark wireless power transfer systems. Furthermore, the optimal operating point that achieves this maximum is obtained. The optimal operating point is described by the complex amplitudes of the optimal incident and reflected waves of the MIMO channel. This supports the design of the optimal transmitter and receiver multiports. The proposed method applies to arbitrary MIMO channels, taking transmitter-side and/or receiver-side cross-couplings in both near- and far-field scenarios into consideration. Special cases are briefly discussed in this paper in order to illustrate the method.
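
    The core computation, maximizing a Rayleigh quotient by solving a generalized eigenvalue problem, can be sketched directly. The matrices below are small, made-up placeholders standing in for the quantities built from the channel's scattering parameters; only the eigenproblem step reflects the method described.

```python
import numpy as np
from scipy.linalg import eigh

def max_transfer_efficiency(A, B):
    """Maximize the Rayleigh quotient a^H A a / a^H B a over excitation
    vectors a, i.e. solve the generalized eigenproblem A a = lambda * B a.

    A, B: Hermitian matrices (A ~ power delivered, B ~ total incident power),
    assumed already assembled from the channel description.
    Returns (best efficiency, optimal excitation vector).
    """
    vals, vecs = eigh(A, B)            # generalized eigenvalues, ascending
    return vals[-1], vecs[:, -1]

# Toy 2x2 example with made-up, physically plausible matrices.
A = np.array([[0.30, 0.10], [0.10, 0.20]])
B = np.eye(2)
eta, a_opt = max_transfer_efficiency(A, B)
print(eta, a_opt)                      # eta ~ 0.36 for this toy channel
```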

  7. 3D indoor modeling using a hand-held embedded system with multiple laser range scanners

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Wang, Duhu; Xu, Shike

    2016-10-01

    Accurate three-dimensional perception is a key technology for many engineering applications, including mobile mapping, obstacle detection and virtual reality. In this article, we present a hand-held embedded system designed for constructing a 3D representation of structured indoor environments. Different from traditional vehicle-borne mobile mapping methods, the system presented here is capable of efficiently acquiring 3D data while an operator carrying the device traverses the site. It consists of a simultaneous localization and mapping (SLAM) module, a 3D attitude estimation module and a point cloud processing module. The SLAM is based on a scan matching approach using a modern LIDAR system, and the 3D attitude estimate is generated by a navigation filter using inertial sensors. The hardware comprises three 2D time-of-flight laser range finders and an inertial measurement unit (IMU). All the sensors are rigidly mounted on a body frame. The algorithms are developed within the framework of the robot operating system (ROS). The 3D model is constructed using the point cloud library (PCL). Multiple datasets have shown robust performance of the presented system in indoor scenarios.

  8. The importance and pitfalls of correlational science in palliative care research.

    PubMed

    Klepstad, Pål; Kaasa, Stein

    2012-12-01

    Correlational science discovers associations between patient characteristics, symptoms and biomarkers. Correlational science using data from cross-sectional studies is the most frequently applied study design in palliative care research. The purpose of this review is to address the importance and potential pitfalls in correlational science. Associations observed in correlational science studies can be the basis for generating hypotheses that can be tested in experimental studies and are the basic data needed to develop classification systems that can predict patient outcomes. Major pitfalls in correlational science are that associations do not equate with causality and that statistical significance does not necessarily equal a correlation that is of clinical interest. Researchers should be aware of the end-points that are clinically relevant, that end-points should be defined before the start of the analyses, and that studies with several end-points should account for multiplicity. Correlational science in palliative care research can identify related clinical factors and biomarkers. Interpretation of identified associations should be done with careful consideration of the limitations underlying correlational analyses.

  9. Field data sets for seagrass biophysical properties for the Eastern Banks, Moreton Bay, Australia, 2004-2014

    NASA Astrophysics Data System (ADS)

    Roelfsema, Chris M.; Kovacs, Eva M.; Phinn, Stuart R.

    2015-08-01

    This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten year period (2004-2014) data sets were collected using 30-50 transects, 500-800 m in length distributed across a 142 km2 shallow, clear water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets include seagrass property information derived from approximately 3000 georeferenced, downward looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count excel; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, regional to global comparative studies, and, for the design of similar monitoring programs elsewhere.

  10. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun

    2017-04-01

    In recent years, updating the inventory of road infrastructure based on field work has been labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines features at multiple aggregation levels (point-segment-object) with contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facility recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
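
    The classification step, concatenating multi-level and contextual descriptors and feeding them to an SVM, can be sketched with a generic pipeline. The feature set, values, and labels below are synthetic placeholders, not the paper's actual descriptors.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each candidate object is described by concatenated point-, segment- and
# object-level descriptors plus contextual features (relative position,
# relative direction, spatial pattern). All values below are synthetic.
X = np.array([
    # [pt_density, seg_linearity, obj_height_m, rel_height_to_road, rel_dir]
    [0.9, 0.1, 0.1, 0.0, 0.0],    # road surface
    [0.4, 0.8, 8.0, 8.0, 0.2],    # street lamp
    [0.6, 0.2, 12.0, 12.0, 0.0],  # building
    [0.5, 0.9, 0.8, 0.8, 0.1],    # guardrail
])
y = ["road", "lamp", "building", "guardrail"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict([[0.45, 0.85, 7.5, 7.5, 0.15]]))   # most likely 'lamp'
```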

  11. Into the Deep Black Sea: The Icefin Modular AUV for Ice-Covered Ocean Exploration

    NASA Astrophysics Data System (ADS)

    Meister, M. R.; Schmidt, B. E.; West, M. E.; Walker, C. C.; Buffo, J.; Spears, A.

    2015-12-01

    The Icefin autonomous underwater vehicle (AUV) was designed to enable long-range oceanographic exploration of physical and biological ocean environments in ice-covered regions. The vehicle is capable of surveying under-ice geometry, ice and ice-ocean interface properties, as well as water column conditions beneath the ice interface. It was developed with both cryospheric and planetary-analog exploration in mind. The first Icefin prototype was successfully operated in Antarctica in austral summer 2014. The vehicle was deployed through a borehole in the McMurdo Ice Shelf near Black Island and successfully collected sonar, imaging, video and water column data down to 450 m depth. Icefin was developed using a modular design. Each module is designed to perform specific tasks, dependent on the mission objective. Vehicle control and data systems can be stably developed, and power modules added or subtracted for mission flexibility. Multiple sensor bays can be developed in parallel to serve multiple science objectives. This design enables the vehicle to have greater depth capability as well as improved operational simplicity compared to larger vehicles with equivalent capabilities. As opposed to those vehicles, which require greater logistics and associated costs, Icefin can be deployed through boreholes drilled in the ice. Thus, Icefin satisfies the demands of sub-ice missions while maintaining the small form factor and easy deployment necessary for repeated, low-logistical-impact field programs. The current Icefin prototype is 10.5 inches in diameter by 10 feet long and weighs 240 pounds. It comprises two thruster modules with hovering capabilities, an oceanographic sensing module, a main control module and a forward-sensing module for obstacle avoidance. The oceanographic sensing module is fitted with a side scan sonar (SSS), CT sensor, altimetry profiler and Doppler Velocity Log (DVL) with current profiling. Icefin is depth-rated to 1500 m and is equipped with 3.5 km of fiber-optic, Kevlar-reinforced cable, which provides point-to-point communications as well as a stable recovery platform between missions. SUPPORT: Icefin was designed and built at Georgia Tech, under Dr. Britney Schmidt's startup funds with effort contributed from Georgia Tech Research Institute (GTRI).

  12. Photogrammetric Method and Software for Stream Planform Identification

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.

    2013-12-01

    Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35 m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. Three sample photographs of the square, with the created planform and control points, accompany the record.
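
    The radial-distortion step mentioned above (Brown's lens distortion model) can be sketched as follows. The fixed-point inversion, the coefficient value, and the image center are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def undistort(points_px, center, k1, k2=0.0, iters=5):
    """Remove radial lens distortion from pixel coordinates using Brown's
    model x_d = x_u * (1 + k1*r^2 + k2*r^4), inverted by fixed-point
    iteration. Distortion coefficients k1, k2 come from camera calibration;
    the values used below are illustrative only.
    """
    pts = np.asarray(points_px, dtype=float) - center   # center the coordinates
    undist = pts.copy()
    for _ in range(iters):
        r2 = (undist ** 2).sum(axis=1, keepdims=True)   # current radius^2 estimate
        undist = pts / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return undist + center

corners = [[120.0, 80.0], [1900.0, 1050.0]]              # hypothetical marked pixels
print(undistort(corners, center=np.array([960.0, 540.0]), k1=-5e-8))
```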

  13. Numerical and In Vitro Experimental Investigation of the Hemolytic Performance at the Off-Design Point of an Axial Ventricular Assist Pump.

    PubMed

    Liu, Guang-Mao; Jin, Dong-Hai; Jiang, Xi-Hang; Zhou, Jian-Ye; Zhang, Yan; Chen, Hai-Bo; Hu, Sheng-Shou; Gui, Xing-Min

    Ventricular assist pumps do not always function at the design point; instead, these pumps may operate at unfavorable off-design points. For example, the axial ventricular assist pump FW-2, whose design point is a 5 L/min flow rate against a 100 mm Hg pressure increase at 8,000 rpm, sometimes works at off-design flow rates of 1 to 4 L/min. The hemolytic performance of the FW-2 at both the design point and at off-design points was estimated numerically and tested in vitro. Flow characteristics in the pump were numerically simulated and analyzed with special attention paid to the scalar shear stress and exposure time. An in vitro hemolysis test was conducted to verify the numerical results. The simulation results showed that the scalar shear stress in the rotor region at the 1 L/min off-design point was 70% greater than at the 5 L/min design point. The hemolysis index at the 1 L/min off-design point was 3.6 times greater than at the 5 L/min design point. The in vitro results showed that the normalized index of hemolysis increased from 0.017 g/100 L at the 5 L/min design point to 0.162 g/100 L at the 1 L/min off-design point. The hemolysis comparison between the different blood pump flow rates will be helpful for future pump design point selection and will guide the usage of ventricular assist pumps. The hemolytic performance of the blood pump at the working point in the clinic should receive more focus.

  14. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  15. Local search heuristic for the discrete leader-follower problem with multiple follower objectives

    NASA Astrophysics Data System (ADS)

    Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand

    2016-10-01

    We study a discrete bilevel problem, also known as the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For such an ill-posed problem we define feasible and optimal solutions for the pessimistic case. A central point of this work is a two-stage method to get a feasible solution under the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed-integer programs. The method is integrated inside a local-search-based heuristic that is designed to find near-optimal leader solutions.

  16. Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing

    NASA Astrophysics Data System (ADS)

    Klopfer, Eric; Yoon, Susan; Perry, Judy

    2005-09-01

    This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yield similar results to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into understanding curricular applicability of Participatory Simulations that suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.

  17. Design, implementation, and psychometric analysis of a scoring instrument for simulated pediatric resuscitation: a report from the EXPRESS pediatric investigators.

    PubMed

    Donoghue, Aaron; Ventre, Kathleen; Boulet, John; Brett-Fleegler, Marisa; Nishisaki, Akira; Overly, Frank; Cheng, Adam

    2011-04-01

    Robustly tested instruments for quantifying clinical performance during pediatric resuscitation are lacking. The Examining Pediatric Resuscitation Education through Simulation and Scripting (EXPRESS) Collaborative was established to conduct multicenter trials of simulation education in pediatric resuscitation, evaluating performance with multiple instruments, one of which is the Clinical Performance Tool (CPT). We hypothesize that the CPT will measure clinical performance during simulated pediatric resuscitation in a reliable and valid manner. Using a pediatric resuscitation scenario as a basis, a scoring system was designed based on Pediatric Advanced Life Support algorithms comprising 21 tasks. Each task was scored as follows: task not performed (0 points); task performed partially, incorrectly, or late (1 point); and task performed completely, correctly, and within the recommended time frame (2 points). Study teams at 14 children's hospitals went through the scenario twice (PRE and POST) with an interposed 20-minute debriefing. Both scenarios for each of eight study teams were scored by multiple raters. A generalizability study, based on the PRE scores, was conducted to investigate the sources of measurement error in the CPT total scores. Inter-rater reliability was estimated based on the variance components. Validity was assessed by repeated measures analysis of variance comparing PRE and POST scores. Sixteen resuscitation scenarios were reviewed and scored by seven raters. Inter-rater reliability for the overall CPT score was 0.63. POST scores were found to be significantly improved compared with PRE scores when controlled for within-subject covariance (F(1,15) = 4.64, P < 0.05). The variance component ascribable to rater was 2.4%. Reliable and valid measures of performance in simulated pediatric resuscitation can be obtained from the CPT. Future studies should examine the applicability of trichotomous scoring instruments to other clinical scenarios, as well as performance during actual resuscitations.
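
    The CPT rubric itself is simple to encode: 21 tasks, each scored 0, 1, or 2, for a maximum of 42 points. A minimal sketch follows; the rating labels and example ratings are invented, and only the 0/1/2 rubric comes from the abstract.

```python
# Scores follow the rubric described above: 0 = not performed,
# 1 = partial/incorrect/late, 2 = complete, correct, and on time.
TASK_SCORES = {"not_performed": 0, "partial_or_late": 1, "complete_on_time": 2}

def cpt_total(task_ratings):
    """Sum the Clinical Performance Tool score over the 21 scenario tasks.
    task_ratings: list of 21 rubric labels. Returns (total, max_possible)."""
    assert len(task_ratings) == 21, "the CPT scenario defines 21 tasks"
    total = sum(TASK_SCORES[r] for r in task_ratings)
    return total, 2 * len(task_ratings)

ratings = ["complete_on_time"] * 15 + ["partial_or_late"] * 4 + ["not_performed"] * 2
print(cpt_total(ratings))   # (34, 42)
```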

  18. Robust and efficient overset grid assembly for partitioned unstructured meshes

    NASA Astrophysics Data System (ADS)

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
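
    The donor-search kernel, deciding which cell (if any) contains a query point, is typically a cheap bounding-box rejection followed by an exact containment test. The sketch below uses a barycentric test on tetrahedra as a simplified serial stand-in; it ignores the partitioned, parallel aspects that are the paper's actual focus.

```python
import numpy as np

def contains(cell_vertices, p, tol=1e-12):
    """True if point p lies inside a tetrahedral cell, via barycentric
    coordinates; a simplified stand-in for the donor containment test."""
    v = np.asarray(cell_vertices, dtype=float)        # shape (4, 3)
    T = (v[1:] - v[0]).T                               # 3x3 matrix of edge vectors
    lam = np.linalg.solve(T, np.asarray(p, float) - v[0])
    lam = np.append(1.0 - lam.sum(), lam)              # all four barycentric coords
    return bool((lam >= -tol).all())

def find_donor(cells, p):
    """Return the index of the first cell containing p, after a cheap
    axis-aligned bounding-box rejection (the usual acceleration step)."""
    p = np.asarray(p, float)
    for i, cell in enumerate(cells):
        v = np.asarray(cell, float)
        if (p < v.min(axis=0)).any() or (p > v.max(axis=0)).any():
            continue                                   # outside the bounding box
        if contains(v, p):
            return i
    return None                                        # hole/orphan point

tet = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(find_donor([tet], [0.2, 0.2, 0.2]))   # 0
print(find_donor([tet], [0.9, 0.9, 0.9]))   # None
```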

  19. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    PubMed

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
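
    The multiple-time-point estimation amounts to fitting a release-plus-degradation curve to recoveries measured at several hydrolysis times and reading off the true (undegraded) content. The sketch below assumes a simple two-rate first-order model and fabricated recovery values; the exact model form and data used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def hydrolysis_model(t, A, h, d):
    """Free amino acid recovered after hydrolysis time t, assuming first-order
    release (rate h) from an initial bound content A followed by first-order
    degradation (rate d). This functional form is an illustrative assumption,
    not necessarily the model used in the study."""
    return A * h / (d - h) * (np.exp(-h * t) - np.exp(-d * t))

# Hypothetical recoveries (g/100 g) at the hydrolysis times (h) used above.
t_obs = np.array([2, 4, 8, 16, 24, 48, 72, 120, 168], dtype=float)
y_obs = np.array([1.1, 1.9, 2.9, 3.6, 3.7, 3.5, 3.3, 2.9, 2.5])

popt, _ = curve_fit(hydrolysis_model, t_obs, y_obs,
                    p0=(4.0, 0.1, 0.005), maxfev=10000)
A_hat, h_hat, d_hat = popt
print(f"estimated true content A = {A_hat:.2f}, release h = {h_hat:.3f}, "
      f"degradation d = {d_hat:.4f}")
```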

  20. New Generation Live Vaccines against Human Respiratory Syncytial Virus Designed by Reverse Genetics

    PubMed Central

    Collins, Peter L.; Murphy, Brian R.

    2005-01-01

    Development of a live pediatric vaccine against human respiratory syncytial virus (RSV) is complicated by the need to immunize young infants and the difficulty in balancing attenuation and immunogenicity. The ability to introduce desired mutations into infectious virus by reverse genetics provides a method for identifying and designing highly defined attenuating mutations. These can be introduced in combinations as desired to achieve gradations of attenuation. Attenuation is based on several strategies: multiple independent temperature-sensitive point mutations in the polymerase, a temperature-sensitive point mutation in a transcription signal, a set of non–temperature-sensitive mutations involving several genes, deletion of a viral RNA synthesis regulatory protein, and deletion of viral IFN α/β antagonists. The genetic stability of the live vaccine can be increased by judicious choice of mutations. The virus also can be engineered to increase the level of expression of the protective antigens. Protective antigens from antigenically distinct RSV strains can be added or swapped to increase the breadth of coverage. Alternatively, the major RSV protective antigens can be expressed from transcription units added to an attenuated parainfluenza vaccine virus, making a bivalent vaccine. This would obviate the difficulties inherent in the fragility and inefficient in vitro growth of RSV, simplifying vaccine design and use. PMID:16113487

  1. Phase 1 of the First Solar Small Power System Experiment (experimental System No. 1). Volume 1: Technical Studies for Solar Point-focusing, Distributed Collector System, with Energy Conversion at the Collector, Category C

    NASA Technical Reports Server (NTRS)

    Clark, T. B. (Editor)

    1979-01-01

    The technical and economic feasibility of a solar electric power plant for a small community is evaluated and specific system designs for development and demonstration are selected. All systems investigated are defined as point focusing, distributed receiver concepts, with energy conversion at the collector. The preferred system is comprised of multiple parabolic dish concentrators employing Stirling cycle engines for power conversion. The engine, AC generator, cavity receiver, and integral sodium pool boiler/heat transport system are combined in a single package and mounted at the focus of each concentrator. The output of each concentrator is collected by a conventional electrical distribution system which permits grid-connected or stand-alone operation, depending on the storage system selected.

  2. Replacing maladaptive speech with verbal labeling responses: an analysis of generalized responding.

    PubMed Central

    Foxx, R M; Faw, G D; McMorrow, M J; Kyle, M S; Bittle, R G

    1988-01-01

    We taught three mentally handicapped students to answer questions with verbal labels and evaluated the generalized effects of this training on their maladaptive speech (e.g., echolalia) and correct responding to untrained questions. The students received cues-pause-point training on an initial question set followed by generalization assessments on a different set in another setting. Probes were conducted on novel questions in three other settings to determine the strength and spread of the generalization effect. A multiple baseline across subjects design revealed that maladaptive speech was replaced with correct labels (answers) to questions in the training and all generalization settings. These results replicate and extend previous research that suggested that cues-pause-point procedures may be useful in replacing maladaptive speech patterns by teaching students to use their verbal labeling repertoires. PMID:3225258

  3. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  4. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  5. Study of water supply & sanitation practices in India using geographic information systems: some design & other considerations in a village setting.

    PubMed

    Gopal, Srila; Sarkar, Rajiv; Banda, Kalyan; Govindarajan, Jeyanthi; Harijan, B B; Jeyakumar, M B; Mitta, Philip; Sadanala, M E; Selwyn, Tryphena; Suresh, C R; Thomas, V A; Devadason, Pethuru; Kumar, Ranjit; Selvapandian, David; Kang, Gagandeep; Balraj, Vinohar

    2009-03-01

    Availability of clean water and adequate sanitation facilities are of prime importance for limiting diarrhoeal diseases. We examined the water and sanitation facilities of a village in southern India using geographic information system (GIS) tools. Places of residence, water storage and distribution, sewage and places where people in the village defaecated were mapped and drinking water sources were tested for microbial contamination in Nelvoy village, Vellore district, Tamil Nadu. Water in the village was found to be microbiologically unfit for consumption. Analysis using direct observations supplemented by GIS maps revealed poor planning, poor engineering design and lack of policing of the water distribution system causing possible contamination of drinking water from sewage at multiple sites. Until appropriate engineering designs for water supply and sewage disposal to suit individual village needs are made available, point-of-use water disinfection methods could serve as an interim solution.

  6. The design of multi temperature and humidity monitoring system for incubator

    NASA Astrophysics Data System (ADS)

    Yu, Junyu; Xu, Peng; Peng, Zitao; Qiang, Haonan; Shen, Xiaoyan

    2017-01-01

    Currently, the temperature and humidity in an incubator are typically monitored at only one point, which may yield inaccurate or unreliable data and even endanger the safety of the baby. To solve this problem, we designed a multi-point temperature and humidity monitoring system for incubators. The system uses the STC12C5A60S2 microcontroller as the sender core chip, connected to four AM2321 temperature and humidity sensors. We selected the STM32F103ZET6 core development board as the receiving end, working with a Zigbee wireless transmitting and receiving module to realize data acquisition and transmission. The design enables remote real-time observation of the data on a computer by communicating with the PC via Ethernet. Prototype tests show that the system can effectively collect and display the temperature and humidity of multiple incubators at the same time, with four monitoring points in each incubator.

  7. Passive isolation/damping system for the Hubble space telescope reaction wheels

    NASA Technical Reports Server (NTRS)

    Hasha, Martin D.

    1987-01-01

    NASA's Hubble Space Telescope contains large, diffraction-limited optics with extraordinary resolution and performance, far surpassing existing observatories. The need to reduce structure-borne vibration and the resultant optical jitter from critical Pointing Control System components, the Reaction Wheels, prompted the feasibility investigation and eventual development of a passive isolation system. The alternative design concepts considered were required to meet a host of stringent specifications and pass rigid tests to be successfully verified and integrated into the already-built flight vehicle. The final design employs multiple arrays of fluid-damped springs that attenuate over a wide spectrum while confining newly introduced resonances to benign regions of vehicle dynamic response. An overall jitter improvement of roughly a factor of 2 to 3 is attained with this system. The basis, evolution, and performance of the isolation system are presented, specifically discussing design concepts considered, optimization studies, development lessons learned, innovative features, and analytical and ground-test-verified results.

  8. Statistical Power in Evaluations That Investigate Effects on Multiple Outcomes: A Guide for Researchers

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2018-01-01

    Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
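
    As a generic illustration of the multiple testing procedures (MTPs) the guide refers to, the sketch below applies a Bonferroni familywise-error correction and a Benjamini-Hochberg false-discovery-rate correction to a set of illustrative p-values; it is not tied to any specific evaluation design discussed in the report.

      # Two standard multiple testing procedures; p-values are illustrative.
      import numpy as np

      p = np.array([0.001, 0.012, 0.030, 0.045, 0.200])
      alpha = 0.05

      # Bonferroni: compare each p-value to alpha / m to control the familywise error rate.
      bonferroni_reject = p < alpha / len(p)

      # Benjamini-Hochberg step-up rule to control the false discovery rate.
      order = np.argsort(p)
      thresholds = alpha * np.arange(1, len(p) + 1) / len(p)
      below = p[order] <= thresholds
      k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
      bh_reject = np.zeros(len(p), dtype=bool)
      bh_reject[order[:k]] = True

      print(bonferroni_reject, bh_reject)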

  9. Concept Design of a Multi-Band Shared Aperture Reflectarray/Reflector Antenna

    NASA Technical Reports Server (NTRS)

    Spence, Thomas; Cooley, Michael E.; Stenger, Peter; Park, Richard; Li, Lihua; Racette, Paul; Heymsfield, Gerald; Mclinden, Matthew

    2016-01-01

    A scalable dual-band (Ka/W) shared-aperture antenna system design has been developed as a proposed solution to meet the needs of the planned NASA Earth Science Aerosol, Clouds, and Ecosystem (ACE) mission. The design is comprised of a compact Cassegrain reflector/reflectarray with a fixed pointing W-band feed and a cross track scanned Ka-band Active Electronically Scanned Array (AESA). Critical Sub-scale prototype testing and flight tests have validated some of the key aspects of this innovative antenna design, including the low loss reflector/reflectarray surface. More recently the science community has expressed interest in a mission that offers the ability to measure precipitation in addition to clouds and aerosols. In this paper we present summaries of multiple designs that explore options for realizing a tri-frequency (Ku/Ka/W), shared-aperture antenna system to meet these science objectives. Design considerations include meeting performance requirements while emphasizing payload size, weight, prime power, and cost. The extensive trades and lessons learned from our previous dual-band ACE system development were utilized as the foundation for this work.

  10. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
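
    As a hedged sketch of one ingredient of such a synthesis procedure, the code below solves a discrete Lyapunov equation for the steady-state covariance of a small sampled-data system and evaluates a quadratic (LQG-type) cost from it; the matrices are illustrative toys, and the constrained-optimization and gradient machinery of the paper is not reproduced.

      # Evaluate an LQG-type cost via a discrete Lyapunov equation; matrices are illustrative.
      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable closed-loop state matrix
      G = np.array([[0.0], [1.0]])             # disturbance input matrix
      W = np.array([[0.05]])                   # disturbance covariance
      Q = np.eye(2)                            # state weighting in the cost

      # Steady-state covariance X satisfies X = A X A' + G W G'.
      X = solve_discrete_lyapunov(A, G @ W @ G.T)

      # Quadratic cost J = trace(Q X); RMS response constraints could be checked from X as well.
      print(np.trace(Q @ X))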

  11. Concept Design of a Multi-Band Shared Aperture Reflectarray/Reflector Antenna

    NASA Technical Reports Server (NTRS)

    Spence, Thomas; Cooley, Michael; Stenger, Peter; Park, Richard; Li, Lihua; Racette, Paul; Heymsfield, Gerald; Mclinden, Matthew

    2016-01-01

    A scalable dual-band (Ka/W) shared-aperture antenna system design has been developed as a proposed solution to meet the needs of the planned NASA Earth Science Aerosol, Clouds, and Ecosystem (ACE) mission. The design is comprised of a compact Cassegrain reflector/reflectarray with a fixed pointing W-band feed and a cross track scanned Ka-band Active Electronically Scanned Array (AESA). Critical Sub-scale prototype testing and flight tests have validated some of the key aspects of this innovative antenna design, including the low loss reflector/reflectarray surface. More recently the science community has expressed interest in a mission that offers the ability to measure precipitation in addition to clouds and aerosols. In this paper we present summaries of multiple designs that explore options for realizing a tri-frequency (Ku/Ka/W), shared-aperture antenna system to meet these science objectives. Design considerations include meeting performance requirements while emphasizing payload size, weight, prime power, and cost. The extensive trades and lessons learned from our previous dual-band ACE system development were utilized as the foundation for this work.

  12. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  13. Quantifying suspended sediment flux in a mixed-land-use urbanizing watershed using a nested-scale study design.

    PubMed

    Zeiger, Sean; Hubbart, Jason A

    2016-01-15

    Suspended sediment (SS) remains the most pervasive water quality problem globally and yet, despite progress, SS process understanding remains relatively poor in watersheds with mixed-land-use practices. The main objective of the current work was to investigate relationships between suspended sediment and land use types at multiple spatial scales (n=5) using four years of suspended sediment data collected in a representative urbanized mixed-land-use (forest, agriculture, urban) watershed. Water samples were analyzed for SS using a nested-scale experimental watershed study design (n=836 samples×5 gauging sites). Kruskal-Wallis and Dunn's post-hoc multiple comparison tests were used to test for significant differences (CI=95%, p<0.05) in SS levels between gauging sites. Climate extremes (high precipitation/drought) were observed during the study period. Annual maximum SS concentrations exceeded 2387.6 mg/L. Median SS concentrations decreased by 60% from the agricultural headwaters to the rural/urban interface, and increased by 98% as urban land use increased. Multiple linear regression analysis results showed significant relationships between SS, annual total precipitation (positive correlate), forested land use (negative correlate), agricultural land use (negative correlate), and urban land use (negative correlate). Estimated annual SS yields ranged from 16.1 to 313.0 t km(-2) year(-1) mainly due to differences in annual total precipitation. Results highlight the need for additional studies, and point to the need for improved best management practices designed to reduce anthropogenic SS loading in mixed-land-use watersheds. Copyright © 2015 Elsevier B.V. All rights reserved.
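
    As a rough sketch of the between-site significance testing described above, the code below runs a Kruskal-Wallis test across synthetic concentration samples from five gauging sites and then makes pairwise comparisons; the pairwise step uses Mann-Whitney U tests with a Bonferroni adjustment as a simple stand-in for Dunn's post-hoc test, and all data are illustrative.

      # Kruskal-Wallis across sites plus Bonferroni-adjusted pairwise comparisons
      # (a stand-in for Dunn's test); concentrations are synthetic.
      from itertools import combinations
      import numpy as np
      from scipy.stats import kruskal, mannwhitneyu

      rng = np.random.default_rng(0)
      sites = {f"site{i}": rng.lognormal(mean=2 + 0.3 * i, sigma=0.5, size=50) for i in range(5)}

      h, p = kruskal(*sites.values())
      print(f"Kruskal-Wallis H={h:.2f}, p={p:.3g}")

      pairs = list(combinations(sites, 2))
      for a, b in pairs:
          _, p_pair = mannwhitneyu(sites[a], sites[b], alternative="two-sided")
          print(a, b, "adjusted p =", min(1.0, p_pair * len(pairs)))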

  14. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
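
    As a minimal sketch of the variance-based focus/off-focus classification described above (omitting the GPU lookup-table reconstruction itself), the code below treats the per-pixel variance across elemental-image samples as the focus criterion; the sample array and threshold are illustrative.

      # Classify reconstructed points as focus or off-focus by the variance of their
      # corresponding elemental-image samples; data and threshold are illustrative.
      import numpy as np

      # samples[k, i, j]: intensity contributed by elemental image k to reconstructed pixel (i, j)
      samples = np.random.rand(25, 64, 64)

      variance = samples.var(axis=0)            # per-pixel variance across elemental images
      threshold = np.percentile(variance, 30)   # illustrative cutoff
      focus_mask = variance <= threshold        # consistent samples -> focus point

      depth_image = samples.mean(axis=0)
      depth_image[~focus_mask] = 0.0            # suppress off-focus points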

  15. Estimating Statistical Power When Making Adjustments for Multiple Tests

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2016-01-01

    In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…

  16. Multi-point laser coherent detection system and its application on vibration measurement

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, C.; Xu, Y. J.; Liu, H.; Yan, K.; Guo, M.

    2015-05-01

    Laser Doppler vibrometry (LDV) is a well-known interferometric technique to measure the motions, vibrations and mode shapes of machine components and structures. The drawback of commercial LDV is that it can only offer a pointwise measurement. In order to build up a vibrometric image, a scanning device is normally adopted to scan the laser point in two spatial axes. These scanning laser Doppler vibrometers (SLDV) assume that the measurement conditions remain invariant while multiple, identical, sequential measurements are performed. This assumption makes SLDVs impractical for measurements of transient events. In this paper, we introduce a new multiple-point laser coherent detection system based on spatial-encoding technology and a fiber configuration. Simultaneous vibration measurement on multiple points is realized using a single photodetector. A prototype 16-point laser coherent detection system is built and applied to measure the vibration of various objects, such as the body of a car or a motorcycle with the engine running, and under shock tests. The results show the promise of multi-point laser coherent detection systems in the areas of nondestructive testing and precise dynamic measurement.

  17. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, making changes to the design becomes far more difficult in terms of both cost and schedule. Primarily this has been due to a lack of detailed data that is usually uncovered only later, during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or may no longer meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, as the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combating these challenges [5]. In order to bring more detailed information into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. In order to do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, a parametric, polynomial equation representing a tool. A surrogate model must be informed by data from the tool with enough points to represent the solution space for the chosen number of variables with an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space to maximize information gained on the design space while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. 
The tool was then augmented with multiprocessing capability to enable analysis on multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work the authors discuss issues with the method and solutions.
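
    As a hedged sketch of the surrogate-modeling idea described above, the code below samples a small design space, evaluates a stand-in "trajectory metric" at each sample (taking the place of a POST run), and fits a quadratic response surface by least squares; the sampling scheme, the metric, and the model form are all illustrative.

      # DOE sampling + quadratic polynomial surrogate; "trajectory_metric" is a toy stand-in.
      import numpy as np

      rng = np.random.default_rng(1)
      n_samples, n_vars = 60, 3
      X = rng.uniform(0.0, 1.0, size=(n_samples, n_vars))   # simple random DOE over a unit cube

      def trajectory_metric(x):
          # stand-in for an expensive trajectory analysis (e.g., payload delivered to orbit)
          return 10.0 + 3.0 * x[0] - 2.0 * x[1] ** 2 + 1.5 * x[0] * x[2]

      y = np.array([trajectory_metric(x) for x in X])

      def features(X):
          # constant, linear, and second-order terms of a quadratic response surface
          cols = [np.ones(len(X))]
          cols += [X[:, i] for i in range(X.shape[1])]
          cols += [X[:, i] * X[:, j] for i in range(X.shape[1]) for j in range(i, X.shape[1])]
          return np.column_stack(cols)

      coeffs, *_ = np.linalg.lstsq(features(X), y, rcond=None)
      print("surrogate prediction at centre:", features(np.array([[0.5, 0.5, 0.5]])) @ coeffs)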

  18. Patient Preferences for Attributes of Multiple Sclerosis Disease-Modifying Therapies

    PubMed Central

    Loucks, Aimee; Gipson, Gregory; Zhong, Lixian; Bui, Christine; Miller, Elizabeth; Owen, Mary; Pelletier, Daniel; Goodin, Douglas; Waubant, Emmanuelle; McCulloch, Charles E.

    2015-01-01

    Background: Timely individualized treatment is essential to improving relapsing-remitting multiple sclerosis (RRMS) patient health outcomes, yet little is known about how patients make treatment decisions. We sought to evaluate RRMS patient preferences for risks and benefits of treatment. Methods: Fifty patients with RRMS completed conjoint analysis surveys with 16 hypothetical disease-modifying therapy (DMT) medication profiles developed using a fractional factorial design. Medication profiles were assigned preference ratings from 0 (not acceptable) to 10 (most favorable). Medication attributes included a range of benefits, adverse effects, administration routes, and market durations. Analytical models used linear mixed-effects regression. Results: Participants showed the highest preference for medication profiles that would improve their symptoms (β = 0.81–1.03, P < .001), not a proven DMT outcome. Preventing relapses, the main clinical trial outcome, was not associated with significant preferences (P = .35). Each year of preventing magnetic resonance imaging changes and disease symptom progression showed DMT preferences of 0.17 point (β = 0.17, P = .002) and 0.12 point (β = 0.12, P < .001), respectively. Daily oral administration was preferred over all parenteral routes (P < .001). A 1% increase in death or severe disability decreased relative DMT preference by 1.15 points (P < .001). Conclusions: Patient preference focused on symptoms and prevention of progression but not on relapse prevention, the proven drug outcome. Patients were willing to accept some level of serious risk for certain types and amounts of benefits, and they strongly preferred daily oral administration over all other options. PMID:25892977
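
    As a rough sketch of the analytical model mentioned above (a linear mixed-effects regression of preference ratings on attribute levels with patient-level random effects), the code below fits such a model with statsmodels on simulated data; the attribute names, coding, and effect sizes are illustrative and are not the study's design matrix.

      # Mixed-effects regression of ratings on attributes with a random intercept per patient.
      # Attribute names and simulated data are illustrative.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_patients, n_profiles = 50, 16
      data = pd.DataFrame({
          "patient": np.repeat(np.arange(n_patients), n_profiles),
          "symptom_improvement": rng.integers(0, 2, n_patients * n_profiles),
          "oral_administration": rng.integers(0, 2, n_patients * n_profiles),
          "risk_pct": rng.uniform(0, 2, n_patients * n_profiles),
      })
      data["rating"] = (5 + 1.0 * data.symptom_improvement + 0.8 * data.oral_administration
                        - 1.2 * data.risk_pct + rng.normal(0, 1, len(data)))

      model = smf.mixedlm("rating ~ symptom_improvement + oral_administration + risk_pct",
                          data, groups=data["patient"]).fit()
      print(model.summary())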

  19. OpT2mise: a randomized controlled trial to compare insulin pump therapy with multiple daily injections in the treatment of type 2 diabetes-research design and methods.

    PubMed

    Aronson, Ronnie; Cohen, Ohad; Conget, Ignacio; Runzis, Sarah; Castaneda, Javier; de Portu, Simona; Lee, Scott; Reznik, Yves

    2014-07-01

    In insulin-requiring type 2 diabetes patients, current insulin therapy approaches such as basal-alone or basal-bolus multiple daily injections (MDI) have not consistently achieved optimal glycemic control. Previous studies have suggested a potential benefit of continuous subcutaneous insulin infusion (CSII) in these patients. The OpT2mise study is a multicenter, randomized trial comparing CSII with MDI in a large cohort of subjects with evidence of persistent hyperglycemia despite previous MDI therapy. Subjects were enrolled into a run-in period for optimization of their MDI insulin regimen. Subjects showing persistent hyperglycemia (glycated hemoglobin [HbA1c] ≥8% and ≤12%) were then randomly assigned to CSII or to continue an MDI regimen for a 6-month phase followed by a single crossover of the MDI arm, switching to CSII. The primary end point is the between-group difference in mean change in HbA1c from baseline to 6 months. Secondary end points include change in mean 24-h glucose values, area under the curve and time spent in hypoglycemia and hyperglycemia, measures of glycemic excursions, change in postprandial hyperglycemia, and evaluation of treatment satisfaction. Safety end points include hypoglycemia, hospital admissions, and emergency room visits. When subject enrollment was completed in May 2013, 495 subjects had been enrolled in the study. The study completion for the primary end point is expected in January 2014. OpT2mise will represent the largest studied homogeneous cohort of type 2 diabetes patients with persistent hyperglycemia despite optimized MDI therapy. OpT2mise will help define the role of CSII in insulin intensification and define its safety, rate of hypoglycemia, patient adherence, and patient satisfaction.

  20. STAR 3 randomized controlled trial to compare sensor-augmented insulin pump therapy with multiple daily injections in the treatment of type 1 diabetes: research design, methods, and baseline characteristics of enrolled subjects.

    PubMed

    Davis, Stephen N; Horton, Edward S; Battelino, Tadej; Rubin, Richard R; Schulman, Kevin A; Tamborlane, William V

    2010-04-01

    Sensor-augmented pump therapy (SAPT) integrates real-time continuous glucose monitoring (RT-CGM) with continuous subcutaneous insulin infusion (CSII) and offers an alternative to multiple daily injections (MDI). Previous studies provide evidence that SAPT may improve clinical outcomes among people with type 1 diabetes. Sensor-Augmented Pump Therapy for A1c Reduction (STAR) 3 is a multicenter randomized controlled trial comparing the efficacy of SAPT to that of MDI in subjects with type 1 diabetes. Subjects were randomized to either continue with MDI or transition to SAPT for 1 year. Subjects in the MDI cohort were allowed to transition to SAPT for 6 months after completion of the study. SAPT subjects who completed the study were also allowed to continue for 6 months. The primary end point was the difference between treatment groups in change in hemoglobin A1c (HbA1c) percentage from baseline to 1 year of treatment. Secondary end points included percentage of subjects with HbA1c ≤7% and without severe hypoglycemia, as well as area under the curve of time spent in normal glycemic ranges. Tertiary end points include percentage of subjects with HbA1c ≤7%, key safety end points, user satisfaction, and responses on standardized assessments. A total of 495 subjects were enrolled, and the baseline characteristics were similar between the SAPT and MDI groups. Study completion is anticipated in June 2010. Results of this randomized controlled trial should help establish whether an integrated RT-CGM and CSII system benefits patients with type 1 diabetes more than MDI.

  1. Advanced integrated life support system update

    NASA Technical Reports Server (NTRS)

    Whitley, Phillip E.

    1994-01-01

    The Advanced Integrated Life Support System Program (AILSS) is an advanced development effort to integrate the life support and protection requirements using the U.S. Navy's fighter/attack mission as a starting point. The goal of AILSS is to optimally combine protection from altitude, acceleration, chemical/biological agent, and thermal environment (hot, cold, and cold-water immersion) stresses with mission enhancement through improved restraint, night vision, and head-mounted reticules and displays to ensure mission capability. The primary emphasis to date has been to establish garment design requirements and tradeoffs for protection. Here the garment and the human interface are treated as a system. Twelve state-of-the-art concepts from government and industry were evaluated for design versus performance. On the basis of a combination of centrifuge and thermal manikin data, thermal modeling, and mobility studies, some key design parameters have been determined. Future efforts will concentrate on the integration of protection through garment design and the use of a single-layer, multiple-function concept to streamline the garment system.

  2. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  3. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  4. Optimization and development of stable w/o/w cosmetic multiple emulsions by means of the Quality by Design approach.

    PubMed

    Kovács, A; Erős, I; Csóka, I

    2016-04-01

    The aim of our present work was to develop stable water-in-oil-in-water (w/o/w) cosmetic multiple emulsions that are proper for cosmetic use and can also be applied on the skin as pharmaceutical vehicles by means of the Quality by Design (QbD) concept. This product design concept consists of a risk assessment step and also the 'predetermination' of the critical material attributes and process parameters of a stable multiple emulsion system. We have set up the hypothesis that the stability of multiple emulsions can be improved by the development based on such systematic planning - making a map of critical product parameters - so their industrial usage can be increased. The risk assessment and the determination of critical physical-chemical stability parameters of w/o/w multiple emulsions to define critical control points were performed by means of quality tools and the leanqbd™ (QbD Works LLC, Fremont, CA, U.S.A.) software. Critical materials and process parameters: Based on the results of preformulation experiments, three factors, namely entrapped active agent, preparation methodology and shear rate, were found to be highly critical factors for critical quality attributes (CQAs) and for stability, whereas the nature of oil was found to be a medium-level risk factor. The results of the risk assessment are the following: (i) droplet structure and size distribution should be evaluated together to be able to predict the stability issues, (ii) the presence of entrapped active agents had a great impact on droplet structure, (iii) the viscosity curves represent the structural changes during storage; if the decrease in relative viscosity is >15%, the emulsion disintegrates, and (iv) it is enough to use the shear rate between 34g and 116g relative centrifugal force (RCF). CQAs: By risk assessment, we discovered that four factors should be considered to be high-risk variables as compared to others: droplet size, droplet structure, viscosity and multiple character were found to be highly critical attributes. The preformulation experiment is the part of a development plan. On the basis of these results, the control strategy can be defined and a stable multiple emulsion can be ensured that meets the relevant stakeholders' quality expectations. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  5. Case-control analysis in highway safety: Accounting for sites with multiple crashes.

    PubMed

    Gross, Frank

    2013-12-01

    There is an increased interest in the use of epidemiological methods in highway safety analysis. The case-control and cohort methods are commonly used in the epidemiological field to identify risk factors and quantify the risk or odds of disease given certain characteristics and factors related to an individual. This same concept can be applied to highway safety where the entity of interest is a roadway segment or intersection (rather than a person) and the risk factors of interest are the operational and geometric characteristics of a given roadway. One criticism of the use of these methods in highway safety is that they have not accounted for the difference between sites with single and multiple crashes. In the medical field, a disease either occurs or it does not; multiple occurrences are generally not an issue. In the highway safety field, it is necessary to evaluate the safety of a given site while accounting for multiple crashes. Otherwise, the analysis may underestimate the safety effects of a given factor. This paper explores the use of the case-control method in highway safety and two variations to account for sites with multiple crashes. Specifically, the paper presents two alternative methods for defining cases in a case-control study and compares the results in a case study. The first alternative defines a separate case for each crash in a given study period, thereby increasing the weight of the associated roadway characteristics in the analysis. The second alternative defines entire crash categories as cases (sites with one crash, sites with two crashes, etc.) and analyzes each group separately in comparison to sites with no crashes. The results are also compared to a "typical" case-control application, where the cases are simply defined as any entity that experiences at least one crash and controls are those entities without a crash in a given period. In a "typical" case-control design, the attributes associated with single-crash segments are weighted the same as the attributes of segments with multiple crashes. The results support the hypothesis that the "typical" case-control design may underestimate the safety effects of a given factor compared to methods that account for sites with multiple crashes. Compared to the first alternative case definition (where multiple crash segments represent multiple cases) the results from the "typical" case-control design are less pronounced (i.e., closer to unity). The second alternative (where case definitions are constructed for various crash categories and analyzed separately) provides further evidence that sites with single and multiple crashes should not be grouped together in a case-control analysis. This paper indicates a clear need to differentiate sites with single and multiple crashes in a case-control analysis. While the results suggest that sites with multiple crashes can be accounted for using a case-control design, further research is needed to determine the optimal method for addressing this issue. This paper provides a starting point for that research. Copyright © 2012 Elsevier Ltd. All rights reserved.
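
    As a toy illustration of the difference between the "typical" case definition and the first alternative (one case per crash), the sketch below computes odds ratios for a handful of made-up sites; the counts are illustrative, not from the study.

      # Toy odds-ratio contrast between the "typical" case definition and the
      # one-case-per-crash alternative; counts are illustrative.
      def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
          return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

      # sites: (has_risk_factor, number_of_crashes); crash-free sites serve as controls
      sites = [(True, 3), (True, 1), (False, 1), (True, 0), (False, 0), (False, 0)]
      exposed_controls = sum(1 for f, n in sites if f and n == 0)
      unexposed_controls = sum(1 for f, n in sites if not f and n == 0)

      # typical design: each crash site counts once
      typical = odds_ratio(sum(1 for f, n in sites if f and n > 0),
                           sum(1 for f, n in sites if not f and n > 0),
                           exposed_controls, unexposed_controls)

      # alternative 1: each crash contributes a separate case
      per_crash = odds_ratio(sum(n for f, n in sites if f),
                             sum(n for f, n in sites if not f),
                             exposed_controls, unexposed_controls)
      print(typical, per_crash)   # the per-crash definition weights multi-crash sites more heavily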

  6. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  7. System for Training Aviation Regulations (STAR): Using Multiple Vantage Points To Learn Complex Information through Scenario-Based Instruction and Multimedia Techniques.

    ERIC Educational Resources Information Center

    Chandler, Terrell N.

    1996-01-01

    The System for Training of Aviation Regulations (STAR) provides comprehensive training in understanding and applying Federal aviation regulations. STAR gives multiple vantage points with multimedia presentations and storytelling within four categories of learning environments: overviews, scenarios, challenges, and resources. Discusses the…

  8. The Registration and Segmentation of Heterogeneous Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Al-Durgham, Mohannad M.

    Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three dimensional point data. Besides the conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), which caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, one should address the heterogeneous LiDAR data post-processing (i.e., registration and segmentation) problems adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP) algorithm, denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and noise level. In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans, which has a positive impact on the completeness of the scanned feature. Therefore, the derived products from these point clouds have higher accuracy, as seen in the full manuscript.
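
    As a minimal sketch of the registration step, the code below performs one classical ICP iteration (nearest-neighbour correspondences plus an SVD-based rigid fit); this is standard ICP rather than the dissertation's ICPP variant, and the point clouds are synthetic.

      # One classical ICP iteration: nearest neighbours via a KD-tree, then a rigid SVD fit.
      # Standard ICP, not the ICPP variant; point clouds are synthetic.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp_step(source, target):
          _, idx = cKDTree(target).query(source)      # nearest target point for each source point
          matched = target[idx]
          mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
          H = (source - mu_s).T @ (matched - mu_t)    # cross-covariance
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                    # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = mu_t - R @ mu_s
          return source @ R.T + t, R, t

      rng = np.random.default_rng(3)
      target = rng.uniform(size=(200, 3))
      source = target + np.array([0.05, -0.02, 0.01])  # small known misalignment
      aligned, R, t = icp_step(source, target)
      print("recovered translation:", t)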

  9. Designs for Testing Group-Based Interventions with Limited Numbers of Social Units: The Dynamic Wait-Listed and Regression Point Displacement Designs.

    PubMed

    Wyman, Peter A; Henry, David; Knoblauch, Shannon; Brown, C Hendricks

    2015-10-01

    The dynamic wait-listed design (DWLD) and regression point displacement design (RPDD) address several challenges in evaluating group-based interventions when there is a limited number of groups. Both DWLD and RPDD utilize efficiencies that increase statistical power and can enhance balance between community needs and research priorities. The DWLD blocks on more time units than traditional wait-listed designs, thereby increasing the proportion of a study period during which intervention and control conditions can be compared, and can also improve logistics of implementing intervention across multiple sites and strengthen fidelity. We discuss DWLDs in the larger context of roll-out randomized designs and compare it with its cousin the Stepped Wedge design. The RPDD uses archival data on the population of settings from which intervention unit(s) are selected to create expected posttest scores for units receiving intervention, to which actual posttest scores are compared. High pretest-posttest correlations give the RPDD statistical power for assessing intervention impact even when one or a few settings receive intervention. RPDD works best when archival data are available over a number of years prior to and following intervention. If intervention units were not randomly selected, propensity scores can be used to control for non-random selection factors. Examples are provided of the DWLD and RPDD used to evaluate, respectively, suicide prevention training (QPR) in 32 schools and a violence prevention program (CeaseFire) in two Chicago police districts over a 10-year period. How DWLD and RPDD address common threats to internal and external validity, as well as their limitations, are discussed.

  10. Designs for testing group-based interventions with limited numbers of social units: The dynamic wait-listed and regression point displacement designs

    PubMed Central

    Wyman, Peter A.; Brown, C. Hendricks

    2015-01-01

    The dynamic wait-listed design (DWLD) and regression point displacement design (RPDD) address several challenges in evaluating group-based interventions when there is a limited number of groups. Both DWLD and RPDD utilize efficiencies that increase statistical power and can enhance balance between community needs and research priorities. The DWLD blocks on more time units than traditional wait-listed designs, thereby increasing the proportion of a study period during which intervention and control conditions can be compared, and can also improve logistics of implementing intervention across multiple sites and strengthen fidelity. We discuss DWLDs in the larger context of roll-out randomized designs and compare it with its cousin the Stepped Wedge design. The RPDD uses archival data on the population of settings from which intervention unit(s) are selected to create expected posttest scores for units receiving intervention, to which actual posttest scores are compared. High pretest-posttest correlations give the RPDD statistical power for assessing intervention impact even when one or a few settings receive intervention. RPDD works best when archival data are available over a number of years prior to and following intervention. If intervention units were not randomly selected, propensity scores can be used to control for nonrandom selection factors. Examples are provided of the DWLD and RPDD used to evaluate, respectively, suicide prevention training (QPR) in 32 schools and a violence prevention program (CeaseFire) in 2 Chicago police districts over a 10-year period. How DWLD and RPDD address common threats to internal and external validity, as well as their limitations, are discussed. PMID:25481512
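
    As a minimal sketch of the regression point displacement idea (predict the intervention unit's posttest from the pretest-posttest regression over the untreated settings and examine the displacement), the code below uses simulated archival data; all numbers are illustrative.

      # Regression point displacement sketch; archival scores are simulated.
      import numpy as np

      rng = np.random.default_rng(4)
      pretest = rng.normal(50, 10, 30)                      # archival pretest scores for 30 settings
      posttest = 5 + 0.9 * pretest + rng.normal(0, 3, 30)   # strong pre-post correlation

      # Fit the pre-post regression on the non-intervention settings (indices 1..29).
      slope, intercept = np.polyfit(pretest[1:], posttest[1:], 1)

      # Setting 0 received the intervention: compare observed vs. expected posttest.
      expected = intercept + slope * pretest[0]
      observed = posttest[0] - 8.0                          # pretend the intervention lowered the outcome
      residual_sd = np.std(posttest[1:] - (intercept + slope * pretest[1:]), ddof=2)
      print("displacement (in residual SD units):", (observed - expected) / residual_sd)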

  11. Global design of satellite constellations: a multi-criteria performance comparison of classical walker patterns and new design patterns

    NASA Astrophysics Data System (ADS)

    Lansard, Erick; Frayssinhes, Eric; Palmade, Jean-Luc

    Basically, the problem of designing a multisatellite constellation exhibits a lot of parameters with many possible combinations: total number of satellites, orbital parameters of each individual satellite, number of orbital planes, number of satellites in each plane, spacings between satellites of each plane, spacings between orbital planes, relative phasings between consecutive orbital planes. Hopefully, some authors have theoretically solved this complex problem under simplified assumptions: the permanent (or continuous) coverage by a single and multiple satellites of the whole Earth and zonal areas has been entirely solved from a pure geometrical point of view. These solutions exhibit strong symmetry properties (e.g. Walker, Ballard, Rider, Draim constellations): altitude and inclination are identical, orbital planes and satellites are regularly spaced, etc. The problem with such constellations is their oversimplified and restricted geometrical assumption. In fact, the evaluation function which is used implicitly only takes into account the point-to-point visibility between users and satellites and does not deal with very important constraints and considerations that become mandatory when designing a real satellite system (e.g. robustness to satellite failures, total system cost, common view between satellites and ground stations, service availability and satellite reliability, launch and early operations phase, production constraints, etc.). An original and global methodology relying on a powerful optimization tool based on genetic algorithms has been developed at ALCATEL ESPACE. In this approach, symmetrical constellations can be used as initial conditions of the optimization process together with specific evaluation functions. A multi-criteria performance analysis is conducted and presented here in a parametric way in order to identify and evaluate the main sensitive parameters. Quantitative results are given for three examples in the fields of navigation, telecommunication and multimedia satellite systems. In particular, a new design pattern with very efficient properties in terms of robustness to satellite failures is presented and compared with classical Walker patterns.

  12. Active disturbance rejection controller of fine tracking system for free space optical communication

    NASA Astrophysics Data System (ADS)

    Cui, Ning; Liu, Yang; Chen, Xinglin; Wang, Yan

    2013-08-01

    Free space optical communication is one of the most promising approaches for future communications. Laser beam acquisition, pointing and tracking are crucial technologies of free space optical communication. The fine tracking system is an important component of the APT (acquisition, pointing and tracking) system; it cooperates with the coarse pointing system in executing the APT mission. Satellite platform vibration and disturbance reduce the received optical power, increase the bit error rate and seriously affect the performance of laser communication. For the characteristics of the satellite platform, an active disturbance rejection controller was designed to reduce the vibration and disturbance. There are three major contributions in the paper. Firstly, the effects of vibration on inter-satellite optical communications were analyzed, and the causes and characteristics of satellite platform vibration were summarized. The amplitude-frequency response of a filter was designed according to the power spectral density of platform vibration of SILEX (Semiconductor Inter-satellite Laser Experiment), and the signals of platform vibration were then generated by filtering white Gaussian noise with this filter. Secondly, the fast steering mirror is a key component of the fine tracking system for optical communication. Mechanical design and modal analysis were carried out for the tip/tilt mirror, which is driven by a piezoelectric actuator and transmitted through a flexure hinge. The transfer functions of the fast steering mirror, camera, and D/A data acquisition card were established, and the theoretical transfer-function model of the system was further obtained. Finally, an active disturbance rejection control method was developed: multiple parallel extended state observers were designed for estimation of unknown dynamics and external disturbances, and the estimated states were used for nonlinear feedback control and compensation to improve system performance. The simulation results show that the designed controller not only accurately estimates and compensates the disturbances, but also achieves robustness to unknown dynamics. The controller can satisfy the fine tracking accuracy requirement for a free space optical communication system.
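
    As a hedged sketch of the core of active disturbance rejection control, the code below runs a linear extended state observer that estimates the state and the lumped disturbance of a toy second-order plant and cancels the disturbance in the control law; the plant, gains, and disturbance are illustrative, not the paper's fast-steering-mirror model.

      # Linear extended state observer (ESO) with disturbance cancellation; all values illustrative.
      import numpy as np

      dt, b0 = 1e-3, 10.0                        # sample time, nominal input gain
      beta1, beta2, beta3 = 300.0, 3e4, 1e6      # observer gains (bandwidth ~100 rad/s)

      z = np.zeros(3)                            # estimates: [position, velocity, disturbance]
      x = np.zeros(2)                            # true plant state
      for k in range(2000):
          d = 2.0 * np.sin(2 * np.pi * 5 * k * dt)      # unknown disturbance (e.g. platform vibration)
          u = (-20.0 * z[0] - 5.0 * z[1] - z[2]) / b0   # state feedback plus disturbance compensation
          x = x + dt * np.array([x[1], b0 * u + d])     # second-order plant: x1' = x2, x2' = b0*u + d
          e = x[0] - z[0]                               # measurement error drives the observer
          z = z + dt * np.array([z[1] + beta1 * e,
                                 z[2] + b0 * u + beta2 * e,
                                 beta3 * e])
      print("estimated vs. true disturbance at the final step:", z[2], d)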

  13. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP) predict neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.

  14. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
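
    As a simplified stand-in for the spatial weighting described above, the sketch below re-weights a k-NN vote by a per-class spatial prior (playing the role of the multiple-point probabilities derived from the training image); the features, prior, and combination rule are illustrative, not the MPk-NN formulation itself.

      # k-NN vote re-weighted by a per-class spatial prior; data and prior are illustrative.
      import numpy as np
      from collections import Counter

      def spatially_weighted_knn(train_X, train_y, query_x, spatial_prior, k=5):
          """spatial_prior: dict class -> probability from a spatial (training-image) model."""
          d = np.linalg.norm(train_X - query_x, axis=1)
          neighbours = train_y[np.argsort(d)[:k]]
          votes = Counter(neighbours)
          scores = {c: (votes.get(c, 0) / k) * spatial_prior.get(c, 1e-6) for c in spatial_prior}
          return max(scores, key=scores.get)

      rng = np.random.default_rng(5)
      train_X = rng.normal(size=(100, 4))
      train_y = rng.integers(0, 3, 100)
      print(spatially_weighted_knn(train_X, train_y, rng.normal(size=4), {0: 0.2, 1: 0.5, 2: 0.3}))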

  15. Clinical Outcome Assessments: Use of Normative Data in a Pediatric Rare Disease.

    PubMed

    Phillips, Dawn; Leiro, Beth

    2018-05-01

    Pediatric rare diseases present unique challenges in clinical trial design and in selection of clinical outcome assessments (COAs) used to support claims in medical product labeling. COAs that discriminate level of function relative to a normative sample are particularly important in the pediatric rare disease setting because the literature is often void of natural history data. Pediatric rare disease clinical trials will often include a wide age distribution. Gross and fine motor skills, communication, cognition, and independence in activities of daily living vary by age, and it may be difficult to distinguish between treatment effect and change due to developmental maturation. Asfotase alfa was granted breakthrough therapy designation and subsequently approved for the treatment of hypophosphatasia (HPP; a genetic metabolic musculoskeletal disorder) and is used in this discussion to illustrate COA selection in a pediatric rare disease. Multiple COAs with normative data in HPP clinical trials for asfotase alfa are presented. The assessment instruments included the Bayley Scales of Infant and Toddler Development-Third Edition, the Bruininks-Oseretsky Test of Motor Proficiency, Second Edition, the Childhood Health Assessment Questionnaire, the Pediatric Outcomes Data Collection Instrument, handheld dynamometry, the 6-minute walk test, and the Modified Performance-Oriented Mobility Assessment-Gait scale. Multiple end points were required to adequately capture the impact of asfotase alfa treatment on the multiple systems affected in HPP. These data illustrate the importance of using multiple COAs that provide normative data and to use COAs early in the drug development process for rare pediatric disease. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes.

    PubMed

    Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried

    2008-01-27

    In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero, or from one for ratios thereof, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that are the reason for the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space in between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
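
    As a simplified, generic stand-in for relevance-shifted testing on ratios with a multiplicity correction (not the paper's exact permutation procedure), the sketch below computes a bootstrap lower confidence bound for each variable's ratio of group means at a Bonferroni-adjusted level and compares it with a relevance bound; all data and thresholds are illustrative.

      # Bootstrap lower bounds for per-variable ratios vs. a relevance bound, with a
      # Bonferroni-adjusted level; a generic stand-in, data are illustrative.
      import numpy as np

      rng = np.random.default_rng(6)
      m, n = 20, 30                                   # 20 variables, 30 samples per group
      group_a = rng.lognormal(0.0, 0.3, size=(m, n))
      group_b = rng.lognormal(0.5, 0.3, size=(m, n))  # true ratio of means ~ 1.65 for every variable
      delta = 1.2                                     # relevance threshold on the ratio
      alpha = 0.05 / m                                # Bonferroni-adjusted level

      rejected = 0
      for v in range(m):
          boots = [rng.choice(group_b[v], n).mean() / rng.choice(group_a[v], n).mean()
                   for _ in range(2000)]
          if np.quantile(boots, alpha) > delta:       # one-sided lower bound exceeds the relevance bound
              rejected += 1
      print(rejected, "of", m, "variables show a relevantly increased ratio")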

  17. Initial postbuckling analysis of elastoplastic thin-shear structures

    NASA Technical Reports Server (NTRS)

    Carnoy, E. G.; Panosyan, G.

    1984-01-01

    The design of thin shell structures with respect to elastoplastic buckling requires an extended analysis of the influence of initial imperfections. For conservative design, the most critical defect should be assumed with the maximum allowable magnitude. This defect is closely related to the initial postbuckling behavior. An algorithm is given for the quasi-static analysis of the postbuckling behavior of structures that exhibit multiple buckling points. The algorithm, based upon an energy criterion, allows the computation of the critical perturbation which will be employed for the definition of the critical defect. For computational efficiency, the algorithm uses the reduced basis technique with automatic update of the modal basis. The method is applied to the axisymmetric buckling of cylindrical shells under axial compression, and conclusions are given for future research.

  18. A real-time hybrid neuron network for highly parallel cognitive systems.

    PubMed

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state-of-the-art.

  19. Systems and Methods for Imaging of Falling Objects

    NASA Technical Reports Server (NTRS)

    Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)

    2014-01-01

    Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
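
    As a minimal sketch of the two-view part of such a pipeline, the code below estimates the epipolar geometry (fundamental matrix) from synthetic correspondences and triangulates the matched points with OpenCV; the cameras and points are synthetic, and the rectification step described in the patent is omitted.

      # Fundamental matrix + triangulation on synthetic two-view data with OpenCV.
      import numpy as np
      import cv2

      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 projection (3x4)
      P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])   # camera 2, small baseline in x
      pts3d = np.random.rand(20, 3) + np.array([0.0, 0.0, 2.0])       # points in front of both cameras

      def project(P, X):
          Xh = np.hstack([X, np.ones((len(X), 1))])
          x = (P @ Xh.T).T
          return (x[:, :2] / x[:, 2:]).astype(np.float32)

      pts1, pts2 = project(P1, pts3d), project(P2, pts3d)

      F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)     # epipolar geometry
      Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)              # 4xN homogeneous points
      X = (Xh[:3] / Xh[3]).T
      print("max reconstruction error:", np.abs(X - pts3d).max())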

  20. MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.

    PubMed

    Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel

    2016-01-01

    We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.

  1. Drift-free solar sail formations in elliptical Sun-synchronous orbits

    NASA Astrophysics Data System (ADS)

    Parsay, Khashayar; Schaub, Hanspeter

    2017-10-01

    To study the spatial and temporal variations of plasma in the highly dynamic environment of the magnetosphere, multiple spacecraft must fly in formation. The objective of this study is to investigate the feasibility of solar sail formation flying in the Earth-centered, Sun-synchronous orbit regime. The focus of this effort is to enable formation flying for a group of solar sails that maintain a nominally fixed Sun-pointing attitude during formation flight, solely for the purpose of precessing their orbit apse lines Sun-synchronously. A fixed-attitude solar sail formation is motivated by the difficulties of simultaneously controlling orbit and attitude in flying solar sails. First, the secular rates of the orbital elements resulting from the effects of solar radiation pressure (SRP) are determined using averaging theory for a Sun-pointing-attitude sail. These averaged rates are used to analytically derive the first-order necessary conditions for a drift-free solar sail formation in Sun-synchronous orbits, assuming a fixed Sun-pointing orientation for each sail in formation. The validity of the first-order necessary conditions is illustrated by designing quasi-periodic relative motions. Next, nonlinear programming is applied to design truly drift-free two-craft solar sail formations. Lastly, analytic expressions are derived to determine the long-term dynamics and sensitivity of the formation with respect to constant attitude errors, uncertainty in orbital elements, and uncertainty in a sail's characteristic acceleration.

  2. Acoustic field in unsteady moving media

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Maestrello, L.; Ting, L.

    1995-01-01

    In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t,x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t) i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t) i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection; the multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.

  3. Optimization of a therapeutic protocol for intravenous injection of human mesenchymal stem cells after cerebral ischemia in adult rats.

    PubMed

    Omori, Yoshinori; Honmou, Osamu; Harada, Kuniaki; Suzuki, Junpei; Houkin, Kiyohiro; Kocsis, Jeffery D

    2008-10-21

    The systemic injection of human mesenchymal stem cells (hMSCs) prepared from adult bone marrow has therapeutic benefits after cerebral artery occlusion in rats, and may have multiple therapeutic effects at various sites and times within the lesion as the cells respond to a particular pathological microenvironment. However, the comparative therapeutic benefits of multiple injections of hMSCs at different time points after cerebral artery occlusion in rats remain unclear. In this study, we induced middle cerebral artery occlusion (MCAO) in rats using intra-luminal vascular occlusion, and infused hMSCs intravenously either at a single 6 h time point (low and high cell doses) or at multiple time points after MCAO. From MRI analyses, lesion volume was reduced in all hMSC injection groups compared with serum-alone injections. However, the greatest therapeutic benefit was achieved following a single high-cell-dose injection at 6 h post-MCAO, rather than multiple lower-dose infusions over multiple time points. Three-dimensional analysis of capillary vessels in the lesion indicated that the capillary volume was equally increased in all of the cell-injected groups. Thus, differences in functional outcome among the hMSC transplantation subgroups are not likely the result of differences in angiogenesis, but rather of differences in neuroprotective effects.

  4. Consistent and reproducible positioning in longitudinal imaging for phenotyping genetically modified swine

    NASA Astrophysics Data System (ADS)

    Hammond, Emily; Dilger, Samantha K. N.; Stoyles, Nicholas; Judisch, Alexandra; Morgan, John; Sieren, Jessica C.

    2015-03-01

    Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols, while characterizing the genotype to phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. After ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Alignment was quantified via eleven pairs of corresponding points on the skeleton, with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit aided in skeletal alignment to within an average of 13.13 ± 6.7 mm for all five subjects, providing a strong foundation for developing qualitative and quantitative methods of disease tracking.

  5. Multivariate meta-analysis of prognostic factor studies with multiple cut-points and/or methods of measurement.

    PubMed

    Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P

    2015-07-30

    A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  6. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  7. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  8. Ocular stability and set-point adaptation

    PubMed Central

    Jareonsettasin, P.; Leigh, R. J.

    2017-01-01

    A fundamental challenge to the brain is how to prevent intrusive movements when quiet is needed. Unwanted limb movements such as tremor impair fine motor control and unwanted eye drifts such as nystagmus impair vision. A stable platform is also necessary to launch accurate movements. Accordingly, nature has designed control systems with agonist (excitation) and antagonist (inhibition) muscle pairs functioning in push–pull, around a steady level of balanced tonic activity, the set-point. Sensory information can be organized similarly, as in the vestibulo-ocular reflex, which generates eye movements that compensate for head movements. The semicircular canals, working in coplanar pairs, one in each labyrinth, are reciprocally excited and inhibited as they transduce head rotations. The relative change in activity is relayed to the vestibular nuclei, which operate around a set-point of stable balanced activity. When a pathological imbalance occurs, producing unwanted nystagmus without head movement, an adaptive mechanism restores the proper set-point and eliminates the nystagmus. Here we used 90 min of continuous 7 T magnetic field labyrinthine stimulation (MVS) in normal humans to produce sustained nystagmus simulating vestibular imbalance. We identified multiple time-scale processes towards a new zero set-point showing that MVS is an excellent paradigm to investigate the neurobiology of set-point adaptation. This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’. PMID:28242733

  9. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
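
    For orientation, the sketch below shows one ES-MDA style ensemble-smoother update, as used here to condition the permeability ensemble on production data before facies are re-inferred at the pilot points; the inflation factor 'alpha' and all array names are illustrative, not the study's settings.

      # One ES-MDA style analysis step (schematic).
      import numpy as np

      def es_mda_update(M, D, d_obs, C_d, alpha, seed=None):
          """M: (n_param, n_ens) parameters; D: (n_data, n_ens) simulated data;
          d_obs: (n_data,) observations; C_d: (n_data, n_data) data-error covariance."""
          rng = np.random.default_rng(seed)
          n_ens = M.shape[1]
          dM = M - M.mean(axis=1, keepdims=True)
          dD = D - D.mean(axis=1, keepdims=True)
          C_md = dM @ dD.T / (n_ens - 1)                 # parameter-data cross-covariance
          C_dd = dD @ dD.T / (n_ens - 1)                 # data covariance
          K = C_md @ np.linalg.inv(C_dd + alpha * C_d)   # Kalman-like gain
          noise = rng.multivariate_normal(np.zeros(len(d_obs)),
                                          alpha * C_d, size=n_ens).T
          return M + K @ (d_obs[:, None] + noise - D)    # updated ensemble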

  10. Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity

    DOE PAGES

    Oliver, J. B.

    2017-06-12

    Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions, as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. As a result, this systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.

  11. Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity.

    PubMed

    Oliver, J B

    2017-06-20

    Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. This systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.

  12. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event endpoint. We assume that the study population comprises multiple subpopulations with different prognoses, but that the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of the subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
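
    For reference, a commonly used form of the stratified one-sample log-rank statistic is sketched below (the paper's exact formulation and its sample-size derivation may differ): within each stratum the observed number of events is compared with the number expected under a reference cumulative hazard, and the stratum contributions are pooled; all names are illustrative.

      # Stratified one-sample log-rank statistic (schematic form).
      import numpy as np

      def stratified_one_sample_logrank(times, events, strata, cum_hazard):
          """times: follow-up times; events: 1=event, 0=censored; strata: labels;
          cum_hazard: dict mapping stratum label -> callable Lambda0(t)."""
          times, events, strata = map(np.asarray, (times, events, strata))
          O = E = 0.0
          for s in np.unique(strata):
              m = strata == s
              O += events[m].sum()                  # observed events in stratum s
              E += np.sum(cum_hazard[s](times[m]))  # expected under the reference hazard
          z = (O - E) / np.sqrt(E)                  # approximately N(0, 1) under the null
          return O, E, z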

  13. Energy spectrum of medium energy gamma-rays from the galactic center region. [experimental design

    NASA Technical Reports Server (NTRS)

    Palmeira, R. A. R.; Ramanujarao, K.; Dutra, S. L. G.; Bertsch, D. L.; Kniffen, D. A.; Morris, D. J.

    1978-01-01

    A balloon-borne magnetic-core digitized spark chamber, with two assemblies of spark chambers above and below the scintillation counters, was used to measure the medium-energy gamma-ray flux from the galactic center region. Gamma-ray calculations are based on the multiple scattering of the pair electrons in 15 aluminum plates interleaved in the spark chamber modules. Counting rates determined during ascent and at ceiling indicate the presence of a diffuse component in this energy range. Preliminary results give an integral flux between 15 and 70 MeV, which is compared with the differential points reported in other results.

  14. Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, J. B.

    Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions, as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. As a result, this systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.

  15. Studying Petrophysical and Geomechanical Properties of Utica Point-Pleasant Shale and its Variations Across the Northern Appalachian Basin

    NASA Astrophysics Data System (ADS)

    Raziperchikolaee, S.; Kelley, M. E.; Burchwell, A.

    2017-12-01

    Understanding the petrophysical and geomechanical parameters of shale formations and their variations across the basin is necessary to optimize the design of a hydraulic fracturing program aimed at enhancing long-term oil/gas production from unconventional wells. Dipole sonic logging data (compressional-wave and shear-wave slowness) from multiple wells across the study area, coupled with formation bulk density log data, were used to calculate dynamic elastic parameters, including shear modulus, bulk modulus, Poisson's ratio, and Young's modulus for the shale formations. The individual-well data were aggregated into a single histogram for each parameter to gain an understanding of the variation in the properties (including brittleness) of the Utica Point-Pleasant formations across the entire study area. A crossplot of compressional velocity against bulk density and a crossplot of compressional velocity, shear velocity, and measurement depth were used for a high-level petrophysical characterization of the Utica Point-Pleasant. Detailed interpretation of drilling-induced fractures recorded in image logs and an analysis of shear-wave anisotropy using multi-receiver sonic logs were also performed. The orientation of drilling-induced fractures was measured to determine the maximum horizontal stress azimuth, and shear-wave anisotropy was analysed to predict stress anisotropy around the wellbore and so confirm the direction of maximum horizontal stress. Our study shows how the detailed interpretation of borehole breakouts, drilling-induced fractures, and sonic wave data can be used to reduce uncertainty and produce a better hydraulic fracturing design in the Utica Point Pleasant formations across the northern Appalachian Basin region of Ohio.
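
    The dynamic elastic parameters mentioned above follow from standard elasticity relations once the slowness logs are converted to velocities; the sketch below assumes compressional and shear velocities in m/s and bulk density in kg/m^3, yielding moduli in Pa (the unit choices are illustrative).

      # Dynamic elastic moduli from sonic velocities and bulk density.
      import numpy as np

      def dynamic_moduli(vp, vs, rho):
          vp, vs, rho = map(np.asarray, (vp, vs, rho))
          G = rho * vs**2                                        # shear modulus
          K = rho * (vp**2 - 4.0 / 3.0 * vs**2)                  # bulk modulus
          nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))   # Poisson's ratio
          E = 2.0 * G * (1.0 + nu)                               # Young's modulus
          return {"G": G, "K": K, "nu": nu, "E": E}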

  16. Integrating statistical and clinical research elements in intervention-related grant applications: summary from an NIMH workshop.

    PubMed

    Sherrill, Joel T; Sommers, David I; Nierenberg, Andrew A; Leon, Andrew C; Arndt, Stephan; Bandeen-Roche, Karen; Greenhouse, Joel; Guthrie, Donald; Normand, Sharon-Lise; Phillips, Katharine A; Shear, M Katherine; Woolson, Robert

    2009-01-01

    The authors summarize points for consideration generated in a National Institute of Mental Health (NIMH) workshop convened to provide an opportunity for reviewers from different disciplines-specifically clinical researchers and statisticians-to discuss how their differing and complementary expertise can be well integrated in the review of intervention-related grant applications. A 1-day workshop was convened in October, 2004. The workshop featured panel presentations on key topics followed by interactive discussion. This article summarizes the workshop and subsequent discussions, which centered on topics including weighting the statistics/data analysis elements of an application in the assessment of the application's overall merit; the level of statistical sophistication appropriate to different stages of research and for different funding mechanisms; some key considerations in the design and analysis portions of applications; appropriate statistical methods for addressing essential questions posed by an application; and the role of the statistician in the application's development, study conduct, and interpretation and dissemination of results. A number of key elements crucial to the construction and review of grant applications were identified. It was acknowledged that intervention-related studies unavoidably involve trade-offs. Reviewers are helped when applications acknowledge such trade-offs and provide good rationale for their choices. Clear linkage among the design, aims, hypotheses, and data analysis plan and avoidance of disconnections among these elements also strengthens applications. The authors identify multiple points to consider when constructing intervention-related grant applications. The points are presented here as questions and do not reflect institute policy or comprise a list of best practices, but rather represent points for consideration.

  17. Design and Outcomes of a Comprehensive Care Experience Level System to Evaluate and Monitor Dental Students' Clinical Progress.

    PubMed

    Teich, Sorin T; Roperto, Renato; Alonso, Aurelio A; Lang, Lisa A

    2016-06-01

    A Comprehensive Care Experience Level (CCEL) system that is aligned with Commission on Dental Accreditation (CODA) standards, promotes comprehensive care and prevention, and addresses flaws observed in previous Relative Value Units (RVU)-based programs has been implemented at the School of Dental Medicine, Case Western Reserve University since 2011. The purpose of this article is to report on the design, implementation, and preliminary outcomes of this novel clinical evaluation system. With the development of the CCEL concept, it was decided not to award points for procedures performed on competency exams. The reason behind this decision was that exams are not learning opportunities and are evaluated with summative tools. To determine reasonable alternative requirements, production data from previous classes were gathered and translated into CCEL points. These RVU points had been granted selectively only for restorative procedures completed after the initial preparation stage of the treatment plan, and achievement of the required levels was checked at multiple points during the clinical curriculum. Results of the CCEL system showed that low performing students increased their productivity, overall production at graduation increased significantly, and fluoride utilization to prevent caries rose by an order of magnitude over the RVU system. The CCEL program also allowed early identification and remediation of students having difficulty in the clinic. This successful implementation suggests that the CCEL concept has the potential for widespread adoption by dental schools. This method also can be used as a behavior modification tool to achieve specific patient care or clinical educational goals as illustrated by the way caries prevention was promoted through the program.

  18. Transceiver Design for CMUT-Based Super-Resolution Ultrasound Imaging.

    PubMed

    Behnamfar, Parisa; Molavi, Reza; Mirabbasi, Shahriar

    2016-04-01

    A recently introduced structure for the capacitive micromachined ultrasonic transducers (CMUTs) has focused on the applications of the asymmetric mode of vibration and has shown promising results in construction of super-resolution ultrasound images. This paper presents the first implementation and experimental results of a transceiver circuit to interface such CMUT structures. The multiple-input/multiple-output receiver in this work supports both fundamental and asymmetric modes of operation and includes transimpedance amplifiers and low-power variable-gain stages. These circuit blocks are designed considering the trade-offs between gain, input impedance, noise, linearity and power consumption. The high-voltage transmitter can generate pulse voltages up to 60 V while occupying a considerably small area. The overall circuit is designed and laid out in a 0.35 μm CMOS process and a four-channel transceiver occupies 0.86 × 0.38 mm². The prototype chip is characterized in both electrical and mechanical domains. Measurement results show that each receiver channel has a nominal gain of 110 dBΩ with a 3 dB bandwidth of 9 MHz while consuming 1.02 mW from a 3.3 V supply. The receiver is also highly linear, with a 1 dB compression point of at least 1.05 V, which is considerably higher than previously reported designs. The transmitter consumes 98.1 mW from a 30 V supply while generating 1.38 MHz, 30 V pulses. The CMOS-CMUT system is tested in the transmit mode and shows full functionality in an air medium.

  19. 78 FR 16561 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-side interest, Exchange systems would cancel the remaining balance of the incoming STPN order that... STPN could execute at multiple price points, the incoming STPN would execute at the multiple prices...

  20. Internal Snapping Hip Syndrome: Incidence of Multiple-Tendon Existence and Outcome After Endoscopic Transcapsular Release.

    PubMed

    Ilizaliturri, Victor M; Suarez-Ahedo, Carlos; Acuña, Marco

    2015-10-01

    To report the frequency of presentation of bifid or multiple iliopsoas tendons in patients who underwent endoscopic release for internal snapping hip syndrome (ISHS) and to compare both groups. A consecutive series of patients with ISHS were treated with endoscopic transcapsular release of the iliopsoas tendon at the central compartment and prospectively followed up. The inclusion criteria were patients with a diagnosis of ISHS with failure of conservative treatment. During the procedure, the presence of a bifid tendon was intentionally looked for. Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores were evaluated preoperatively and at last follow-up. Four patients presented with a bifid tendon and one patient had 3 tendons. At a minimum of 12 months' follow-up, the presence of snapping recurrence was evaluated and the WOMAC scores were compared between both groups. Among 279 hip arthroscopies, 28 patients underwent central transcapsular iliopsoas tendon release. The mean age was 29.25 years (range, 16 to 65 years; 6 left and 22 right hips). Group 1 included 5 patients with multiple tendons; the remaining patients formed group 2 (n = 23). None of the patients presented with ISHS recurrence. The mean WOMAC score in group 1 was 39 points (95% confidence interval [CI], 26.2 to 55.4 points) preoperatively and 73.6 points (95% CI, 68.4 to 79.6 points) at last follow-up. In group 2 the mean WOMAC score was 47.21 points (95% CI, 44.4 to 58.2 points) preoperatively and 77.91 points (95% CI, 67.8 to 83.4 points) at last follow-up. We identified a bifid tendon retrospectively on magnetic resonance arthrograms in 3 of the 5 cases that were found to have multiple tendons during surgery. None of these were recognized before the procedures. In this series the surgeon intentionally looked for multiple tendons, which were found in 17.85% of the cases. Clinical results in patients with single- and multiple-tendon snapping seem to be similarly adequate. However, the possibility of a type II error should be considered given the small number of patients. Level IV. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  1. Target proteins of ganoderic acid DM provides clues to various pharmacological mechanisms

    PubMed Central

    Liu, Jie; Shimizu, Kuniyoshi; Tanaka, Akinobu; Shinobu, Wakako; Ohnuki, Koichiro; Nakamura, Takanori; Kondo, Ryuichiro

    2012-01-01

    Ganoderma fungus (Ganodermataceae) is a multifunctional medicinal mushroom and has been traditionally used for the treatment of various types of disease. Ganoderic acid DM (1) is a representative triterpenoid isolated from G. lingzhi and exhibits various biological activities. However, a universal starting point that triggers multiple signaling pathways and results in the multifunctionality of 1 is unknown. Here we present important clues regarding the mechanisms underlying the multi-medicinal action of 1. We examined structure–activity relationships between 1 and its analogs and found that the carbonyl group at C-3 is essential for cytotoxicity. Subsequently, we used 1-conjugated magnetic beads as a probe and identified tubulin as a specific 1-binding protein. Furthermore, 1 showed a Kd similar to that of vinblastine and also affected the assembly of tubulin polymers. This study revealed multiple biological activities of 1 and may contribute to the design and development of new tubulin-inhibiting agents. PMID:23205267

  2. Sharing Overdose Data Across State Agencies to Inform Public Health Strategies: A Case Study.

    PubMed

    Cherico-Hsii, Sara; Bankoski, Andrea; Singal, Pooja; Horon, Isabelle; Beane, Eric; Casey, Meghan; Rebbert-Franklin, Kathleen; Sharfstein, Joshua

    2016-01-01

    Data sharing and analysis are important components of coordinated and cost-effective public health strategies. However, legal and policy barriers have made data from different agencies difficult to share and analyze for policy development. To address a rise in overdose deaths, Maryland used an innovative and focused approach to bring together data on overdose decedents across multiple agencies. The effort was focused on developing discrete intervention points based on information yielded on decedents' lives, such as vulnerability upon release from incarceration. Key aspects of this approach included gubernatorial leadership, a unified commitment to data sharing across agencies with memoranda of understanding, and designation of a data management team. Preliminary results have yielded valuable insights and have helped inform policy. This process of navigating legal and privacy concerns in data sharing across multiple agencies may be applied to a variety of public health problems challenging health departments across the country.

  3. Multiple Applications of Alamar Blue as an Indicator of Metabolic Function and Cellular Health in Cell Viability Bioassays

    PubMed Central

    Rampersad, Sephra N.

    2012-01-01

    Accurate prediction of the adverse effects of test compounds on living systems, detection of toxic thresholds, and expansion of experimental data sets to include multiple toxicity end-point analysis are required for any robust screening regime. Alamar Blue is an important redox indicator that is used to evaluate metabolic function and cellular health. The Alamar Blue bioassay has been utilized over the past 50 years to assess cell viability and cytotoxicity in a range of biological and environmental systems and in a number of cell types including bacteria, yeast, fungi, protozoa and cultured mammalian and piscine cells. It offers several advantages over other metabolic indicators and other cytotoxicity assays. However, as with any bioassay, suitability must be determined for each application and cell model. This review seeks to highlight many of the important considerations involved in assay use and design in addition to the potential pitfalls. PMID:23112716

  4. Spatially intensive sampling by electrofishing for assessing longitudinal discontinuities in fish distribution in a headwater stream

    USGS Publications Warehouse

    Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.

    2017-01-01

    Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m² points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.
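
    As a minimal sketch of the variogram side of this analysis (assumptions: a single variable sampled at known along-channel positions, and illustrative lag bins), an empirical semivariogram can be computed as follows.

      # Empirical semivariogram for points sampled along the stream channel.
      import numpy as np

      def empirical_variogram(positions, values, lag_edges):
          positions = np.asarray(positions, float)
          values = np.asarray(values, float)
          d = np.abs(positions[:, None] - positions[None, :])    # pairwise separations
          sv = 0.5 * (values[:, None] - values[None, :]) ** 2    # semivariance terms
          iu = np.triu_indices(len(values), k=1)                 # count each pair once
          gamma = [sv[iu][(d[iu] >= lo) & (d[iu] < hi)].mean()
                   for lo, hi in zip(lag_edges[:-1], lag_edges[1:])]
          return np.array(gamma)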

  5. Multistability in Chua's circuit with two stable node-foci

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, B. C.; Wang, N.; Xu, Q.

    2016-04-15

    Using only a one-stage op-amp-based negative impedance converter realization, a simplified Chua's diode with positive outer-segment slope is introduced, based on which an improved Chua's circuit realization with a simpler circuit structure is designed. The improved Chua's circuit has an identical mathematical model but a completely different nonlinearity to the classical Chua's circuit, from which multiple attractors including coexisting point attractors, a limit cycle, a double-scroll chaotic attractor, or coexisting chaotic spiral attractors are numerically simulated and experimentally captured. Furthermore, with the dimensionless Chua's equations, the dynamical properties of the Chua's system are studied, including equilibria and stability, phase portraits, bifurcation diagrams, Lyapunov exponent spectra, and attraction basins. The results indicate that the system has two symmetric stable nonzero node-foci in global adjusting parameter regions and exhibits the unusual and striking dynamical behavior of multiple attractors with multistability.
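
    For orientation only, the classical dimensionless Chua equations with the piecewise-linear diode are easy to integrate numerically; the study's improved circuit keeps the same model form but uses a different nonlinearity, and the parameter values below are common textbook choices rather than those of the paper.

      # Classical dimensionless Chua system (illustrative parameters).
      from scipy.integrate import solve_ivp

      alpha, beta, m0, m1 = 9.0, 14.3, -8.0 / 7.0, -5.0 / 7.0

      def f(x):   # piecewise-linear Chua diode characteristic
          return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

      def chua(t, s):
          x, y, z = s
          return [alpha * (y - x - f(x)), x - y + z, -beta * y]

      sol = solve_ivp(chua, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.01)
      # Plotting sol.y[0] against sol.y[1] traces the familiar double-scroll
      # attractor; sweeping the initial condition exposes coexisting attractors.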

  6. Statistical Power in Evaluations That Investigate Effects on Multiple Outcomes: A Guide for Researchers

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2016-01-01

    In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…

  7. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago, when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high-density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system-level computational density. The Naval Air Warfare Center, China Lake, has developed a real-time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard-size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real-time VSB interface to an imaging seeker, an NTSC camera, and other COHO boards. The system is designed to have multiple SIMD machines, each performing different corticomorphic functions. System-level software has been developed which allows a high-level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real-time hardware system is designed to be shrunk into a volume compatible with air-launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  8. An analysis of the relationship of seven selected variables to State Board Test Pool Examination performance of the University of Tennessee, Knoxville, College of Nursing.

    PubMed

    Sharp, T G

    1984-02-01

    The study was designed to determine whether any one of seven selected variables, or a combination of the variables, is predictive of performance on the State Board Test Pool Examination. The selected variables studied were: high school grade point average (HSGPA), The University of Tennessee, Knoxville, College of Nursing grade point average (GPA), and American College Test Assessment (ACT) standard scores (English, ENG; mathematics, MA; social studies, SS; natural sciences, NSC; composite, COMP). Data utilized were from graduates of the baccalaureate program of The University of Tennessee, Knoxville, College of Nursing from 1974 through 1979. The sample of 322 was selected from a total population of 572. Analyses in the Statistical Analysis System (SAS) were designed as follows: an analysis of the predictive relationship of each of the seven selected variables to State Board Test Pool Examination performance (pass or fail); a stepwise discriminant analysis to determine the predictive relationship of the strongest combination of the independent variables to overall State Board Test Pool Examination performance (pass or fail); and a stepwise multiple regression analysis to determine the strongest predictive combination of selected variables for each of the five subexams of the State Board Test Pool Examination. The selected variables were each found to be predictive of SBTPE performance (pass or fail). The strongest combination for predicting SBTPE performance (pass or fail) was found to be GPA, MA, and NSC.

  9. Applying User Input to the Design and Testing of an Electronic Behavioral Health Information System for Wraparound Care Coordination

    PubMed Central

    Bruns, Eric J.; Hyde, Kelly L.; Sather, April; Hook, Alyssa; Lyon, Aaron R.

    2015-01-01

    Health information technology (HIT) and care coordination for individuals with complex needs are high priorities for quality improvement in health care. However, there is little empirical guidance about how best to design electronic health record systems and related technologies to facilitate implementation of care coordination models in behavioral health, or how best to apply user input to the design and testing process. In this paper, we describe an iterative development process that incorporated user/stakeholder perspectives at multiple points and resulted in an electronic behavioral health information system (EBHIS) specific to the wraparound care coordination model for youth with serious emotional and behavioral disorders. First, we review foundational HIT research on how EBHIS can enhance efficiency and outcomes of wraparound that was used to inform development. After describing the rationale for and functions of a prototype EBHIS for wraparound, we describe methods and results for a series of six small studies that informed system development across four phases of effort – predevelopment, development, initial user testing, and commercialization – and discuss how these results informed system design and refinement. Finally, we present next steps, challenges to dissemination, and guidance for others aiming to develop specialized behavioral health HIT. The research team's experiences reinforce the opportunity presented by EBHIS to improve care coordination for populations with complex needs, while also pointing to a litany of barriers and challenges to be overcome to implement such technologies. PMID:26060099

  10. Using virtual reality environment to improve joint attention associated with pervasive developmental disorder.

    PubMed

    Cheng, Yufang; Huang, Ruowen

    2012-01-01

    The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD: in particular, when they make errors during practice, there are no distressing or dangerous consequences to deal with. Joint attention is a critical skill affected by the disorder characteristics of children with PDD, and its absence is a deficit that frequently affects their social relationships in daily life. Therefore, this study designed the Joint Attention Skills Learning (JASL) system with a data glove tool to help children with PDD practice joint attention behavior skills. The JASL system specifically targets the skills of pointing, showing, sharing things, and behavioral interaction with other children with PDD. The system is designed as a playroom scene presented from the first-person perspective of the user. Its functions include pointing and showing, moving virtual objects, 3D animation, text, speech sounds, and feedback. The study employed a single-subject multiple-probe design across subjects, with analysis by visual inspection. The experimental section took three months to complete. Surprisingly, the results reveal that the participants extended their improved joint attention skills into daily life after using the JASL system. The significant potential of this particular treatment of joint attention for each participant is discussed in detail in this paper. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Proteus: a reconfigurable computational network for computer vision

    NASA Astrophysics Data System (ADS)

    Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.

    1992-04-01

    The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
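
    For reference, the gray-scale morphology operations named above can be sketched with SciPy (the Proteus system itself is programmed through the INSIGHT language, not Python; the image and the 3x3 flat structuring element here are placeholders).

      # Gray-scale dilation, erosion, opening and closing on a placeholder image.
      import numpy as np
      from scipy import ndimage

      img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)   # placeholder image
      se = np.ones((3, 3), bool)                                    # flat structuring element

      dilated = ndimage.grey_dilation(img, footprint=se)
      eroded = ndimage.grey_erosion(img, footprint=se)
      opened = ndimage.grey_opening(img, footprint=se)   # erosion then dilation
      closed = ndimage.grey_closing(img, footprint=se)   # dilation then erosion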

  12. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKFs) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by a time-varying, norm-bounded admissible structure that affects all the PWL state-space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple-RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over the methods available in the literature.
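
    A minimal sketch of the multiple-model idea follows: a bank of filters, each matched to one hypothesis (healthy operation, or an additive bias fault on one sensor), runs in parallel and the hypothesis whose innovations stay smallest is selected. The paper's filters use robust, LMI-designed gains to handle the time-varying parameter uncertainty; the plain Kalman recursion below is for illustration only, with hypothetical names throughout.

      # One step of a hypothesis-matched Kalman filter; run one per fault hypothesis.
      import numpy as np

      def kalman_step(x, P, u, y, A, B, C, Q, R, bias):
          x_pred = A @ x + B @ u                     # state prediction
          P_pred = A @ P @ A.T + Q
          innov = y - (C @ x_pred + bias)            # 'bias' encodes the fault hypothesis
          S = C @ P_pred @ C.T + R
          K = P_pred @ C.T @ np.linalg.inv(S)
          x_new = x_pred + K @ innov
          P_new = (np.eye(len(x)) - K @ C) @ P_pred
          score = innov @ np.linalg.solve(S, innov)  # normalized innovation squared
          return x_new, P_new, score

      # Isolation rule (schematic): average 'score' over a sliding window for each
      # hypothesis and declare the fault corresponding to the smallest average.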

  13. Meta-q-plate for complex beam shaping

    PubMed Central

    Ji, Wei; Lee, Chun-Hong; Chen, Peng; Hu, Wei; Ming, Yang; Zhang, Lijian; Lin, Tsung-Hsien; Chigrinov, Vladimir; Lu, Yan-Qing

    2016-01-01

    Optical beam shaping plays a key role in optics and photonics. In this work, meta-q-plate featured by arbitrarily space-variant optical axes is proposed and demonstrated via liquid crystal photoalignment based on a polarization-sensitive alignment agent and a dynamic micro-lithography system. Meta-q-plates with multiple-, azimuthally/radially variant topological charges and initial azimuthal angles are fabricated. Accordingly, complex beams with elliptical, asymmetrical, multi-ringed and hurricane transverse profiles are generated, making the manipulation of optical vortex up to an unprecedented flexibility. The evolution, handedness and Michelson interferogram of the hurricane one are theoretically analysed and experimentally verified. The design facilitates the manipulation of polarization and spatial degrees of freedom of light in a point-to-point manner. The realization of meta-q-plate drastically enhances the capability of beam shaping and may pave a bright way towards optical manipulations, OAM based informatics, quantum optics and other fields. PMID:27149897

  14. Fabrication of Multi-point Side-Firing Optical Fiber by Laser Micro-ablation

    PubMed Central

    Nguyen, Hoang; Arnob, Md Masud Parvez; Becker, Aaron T; Wolfe, John C; Hogan, Matthew K; Horner, Philip J; Shih, Wei-Chuan

    2018-01-01

    A multi-point, side-firing design enables an optical fiber to output light at multiple desired locations along the fiber body. This provides advantages over traditional end-to-end fibers, especially in applications requiring fiber bundles such as brain stimulation or remote sensing. This paper demonstrates that continuous wave (CW) laser micro-ablation can controllably create conical-shaped cavities, or side windows, for outputting light. The dimensions of these cavities determine the amount of firing light and their firing angle. Experimental data show that a single side window on a 730 μm fiber can deliver more than 8 % of the input light. This was increased to more than 19 % on a 65 μm fiber with side windows created using femtosecond (fs) laser ablation and chemical etching. Fine control of light distribution along an optical fiber is critical for various biomedical applications such as light activated drug-release and optogenetics studies. PMID:28454166

  15. A Novel Design of Autonomously Healed Concrete: Towards a Vascular Healing Network

    PubMed Central

    Minnebo, Pieter; Thierens, Glenn; De Valck, Glenn; Van Tittelboom, Kim; De Belie, Nele; Van Hemelrijck, Danny; Tsangouri, Eleni

    2017-01-01

    Concrete is prone to crack formation in the tensile zone, which is why steel reinforcement is introduced in these zones. However, small cracks could still arise, which give liquids and gasses access to the reinforcement causing it to corrode. Self-healing concrete repairs and seals these small (300 µm) cracks, preventing the development of corrosion. In this study, a vascular system, carrying the healing agent, is developed. It consists of tubes connected to a 3D printed distribution piece. This distribution piece has four outlets that are connected to the tubes and has one inlet, which is accessible from outside. Several materials were considered for the tubes, i.e., polymethylmethacrylate, starch, inorganic phosphate cement and alumina. Three-point-bending and four-point-bending tests proved that self-healing and multiple self-healing is possible with this developed vascular system. PMID:28772409

  16. A Heuristic Model of Consciousness with Applications to the Development of Science and Society

    NASA Technical Reports Server (NTRS)

    Curreri, Peter A.

    2010-01-01

    A working model of consciousness is fundamental to understanding the interactions of the observer in science. This paper examines contemporary understanding of consciousness. A heuristic model of consciousness is suggested that is consistent with psychophysics measurements of the bandwidth of consciousness relative to unconscious perception. While the self-referential nature of consciousness confers a survival benefit by assuring that all points of view regarding a problem are experienced in a sufficiently large population, conscious bandwidth is constrained by design to avoid chaotic behavior. The multiple hypotheses provided by conscious reflection enable the rapid progression of science and technology. The questions of free will and the problem of attention are discussed in relation to the model. Finally, the combination of rapid technology growth with the assurance of many unpredictable points of view is considered with respect to contemporary constraints on the development of society.

  17. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with a considerably smaller number of observations compared to the existing tsunami observation networks.
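
    A schematic of the first stage (hedged; array shapes, names, and the number of retained modes are illustrative) is to compute EOF spatial modes by an SVD of an ensemble of simulated fields and take the extrema of the leading modes as candidate gauge locations, which the mesh adaptive direct search then prunes.

      # EOF spatial modes by SVD and candidate observation points at their extrema.
      import numpy as np

      def eof_candidate_points(scenarios, n_modes=5):
          """scenarios: (n_scenarios, n_points) matrix of simulated fields."""
          X = scenarios - scenarios.mean(axis=0, keepdims=True)
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          candidates = set()
          for mode in Vt[:n_modes]:                 # leading EOF spatial modes
              candidates.add(int(np.argmax(mode)))  # positive extremum
              candidates.add(int(np.argmin(mode)))  # negative extremum
          return sorted(candidates)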

  18. Mission design for NISAR repeat-pass Interferometric SAR

    NASA Astrophysics Data System (ADS)

    Alvarez-Salazar, Oscar; Hatch, Sara; Rocca, Jennifer; Rosen, Paul; Shaffer, Scott; Shen, Yuhsyen; Sweetser, Theodore; Xaypraseuth, Peter

    2014-10-01

    The proposed spaceborne NASA-ISRO SAR (NISAR) mission would use the repeat-pass interferometric Synthetic Aperture Radar (InSAR) technique to measure the changing shape of Earth's surface at the centimeter scale in support of investigations in solid Earth and cryospheric sciences. Repeat-pass InSAR relies on multiple SAR observations acquired from nearly identical positions of the spacecraft as seen from the ground. Consequently, there are tight constraints on the repeatability of the orbit, and given the narrow field of view of the radar antenna beam, on the repeatability of the beam pointing. The quality and accuracy of the InSAR data depend on highly precise control of both orbital position and observatory pointing throughout the science observation life of the mission. This paper describes preliminary NISAR requirements and rationale for orbit repeatability and attitude control in order to meet science requirements. A preliminary error budget allocation and an implementation approach to meet these allocations are also discussed.

  19. Meta-q-plate for complex beam shaping.

    PubMed

    Ji, Wei; Lee, Chun-Hong; Chen, Peng; Hu, Wei; Ming, Yang; Zhang, Lijian; Lin, Tsung-Hsien; Chigrinov, Vladimir; Lu, Yan-Qing

    2016-05-06

    Optical beam shaping plays a key role in optics and photonics. In this work, a meta-q-plate featuring arbitrarily space-variant optical axes is proposed and demonstrated via liquid crystal photoalignment based on a polarization-sensitive alignment agent and a dynamic micro-lithography system. Meta-q-plates with multiple, azimuthally/radially variant topological charges and initial azimuthal angles are fabricated. Accordingly, complex beams with elliptical, asymmetrical, multi-ringed and hurricane transverse profiles are generated, bringing the manipulation of optical vortices to an unprecedented flexibility. The evolution, handedness and Michelson interferogram of the hurricane beam are theoretically analysed and experimentally verified. The design facilitates the manipulation of the polarization and spatial degrees of freedom of light in a point-to-point manner. The realization of the meta-q-plate drastically enhances the capability of beam shaping and may pave the way towards optical manipulation, OAM-based informatics, quantum optics and other fields.

  20. Predicting scientific oral presentation scores in a high school photonics science, technology, engineering and mathematics (STEM) program

    NASA Astrophysics Data System (ADS)

    Gilchrist, Pamela O.; Carpenter, Eric D.; Gray-Battle, Asia

    2014-07-01

    A hybrid teacher professional development and student science, technology, engineering and mathematics pipeline enrichment program has been operated by the reporting research group for the past 3 years. Overall, the program has reached 69 students from 13 counties in North Carolina and 57 teachers from 30 counties spread over a total of five states. A quantitative analysis of oral presentations given by participants at a program event is provided. Scores from multiple raters were averaged and used as a criterion in several regression analyses. Overall, it was revealed that student grade point averages, the most advanced science course taken, extra quality points earned in the most advanced science course taken, and posttest scores on a pilot research design survey were significant predictors of student oral presentation scores. Rationale for the findings, opportunities for future research, and implications for the iterative development of the program are discussed.

  1. An interactive modular design for computerized photometry in spectrochemical analysis

    NASA Technical Reports Server (NTRS)

    Bair, V. L.

    1980-01-01

    A general functional description of totally automatic photometry of emission spectra is not available for an operating environment in which the sample compositions and analysis procedures are low-volume and non-routine. The advantages of using an interactive approach to computer control in such an operating environment are demonstrated. This approach includes modular subroutines selected at multiple-option, menu-style decision points. This style of programming is used to trace elemental determinations, including the automated reading of spectrographic plates produced by a 3.4 m Ebert mount spectrograph using a dc-arc in an argon atmosphere. The simplified control logic and modular subroutine approach facilitates innovative research and program development, yet is easily adapted to routine tasks. Operator confidence and control are increased by the built-in options including degree of automation, amount of intermediate data printed out, amount of user prompting, and multidirectional decision points.

  2. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
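
    The sketch below illustrates the general multiple-imputation idea in a deliberately simplified form (a pooled event count rather than the paper's SCCS model); the counts, the Beta posterior for the confirmation rate, and the Poisson-style within-imputation variance are all assumptions made for the example.

```python
# Sketch: multiple imputation of unconfirmed adverse events using a confirmation
# rate estimated from an internal validation sample. Illustrative only; the paper's
# analysis uses an SCCS model rather than the simple event count below.
import numpy as np

rng = np.random.default_rng(1)
n_observed = 120                   # adverse events flagged in EHR data (hypothetical)
n_validated, n_confirmed = 40, 28  # chart-reviewed subsample and confirmed count (hypothetical)
n_unvalidated = n_observed - n_validated

M = 20
estimates, variances = [], []
for _ in range(M):
    # Draw a plausible confirmation rate from its Beta posterior (uniform prior).
    p = rng.beta(n_confirmed + 1, n_validated - n_confirmed + 1)
    # Impute the confirmation status of the unadjudicated events.
    imputed_true = rng.binomial(n_unvalidated, p)
    total_true = n_confirmed + imputed_true
    estimates.append(total_true)
    variances.append(total_true)   # Poisson-style within-imputation variance (assumption)

q_bar = np.mean(estimates)                       # pooled point estimate
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between       # Rubin's rules
print(f"pooled confirmed-event count: {q_bar:.1f} (SE {np.sqrt(total_var):.1f})")
```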

  3. Multi-Component, Multi-Point Interferometric Rayleigh/Mie Doppler Velocimeter

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Lee, Joseph W.; Bivolaru, Daniel

    2012-01-01

    An interferometric Rayleigh scattering system was developed to enable the measurement of multiple, orthogonal velocity components at several points within very-high-speed or high-temperature flows. The velocity of a gaseous flow can be optically measured by sending laser light into the gas flow, and then measuring the scattered light signal that is returned from matter within the flow. Scattering can arise from either gas molecules within the flow itself, known as Rayleigh scattering, or from particles within the flow, known as Mie scattering. Measuring Mie scattering is the basis of all commercial laser Doppler and particle imaging velocimetry systems, but particle seeding is problematic when measuring high-speed and high-temperature flows. The velocimeter is designed to measure the Doppler shift from only Rayleigh scattering, and does not require, but can also measure, particles within the flow. The system combines a direct-view, large-optic interferometric setup that calculates the Doppler shift from fringe patterns collected with a digital camera, and a subsystem to capture and re-circulate scattered light to maximize signal density. By measuring two orthogonal components of the velocity at multiple positions in the flow volume, the accuracy and usefulness of the flow measurement increase significantly over single or nonorthogonal component approaches.
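
    The quantity the interferometer ultimately extracts is the Doppler shift of the scattered light, which for a given scattering geometry is df = (k_s - k_i) . V / (2*pi). The sketch below evaluates this relation for an assumed wavelength, geometry, and flow velocity; none of the values are taken from the instrument described above.

```python
# Sketch: Doppler shift seen in light scattered from a moving gas, the quantity the
# interferometer extracts from its fringe pattern. Geometry and values are hypothetical.
import numpy as np

wavelength = 532e-9                      # laser wavelength in m (assumed)
k_mag = 2 * np.pi / wavelength

k_incident = k_mag * np.array([1.0, 0.0, 0.0])   # beam propagation direction
k_scattered = k_mag * np.array([0.0, 1.0, 0.0])  # 90-degree collection direction
velocity = np.array([800.0, 100.0, 0.0])         # flow velocity in m/s (hypothetical)

# Doppler shift of the scattered light: df = (k_s - k_i) . V / (2*pi)
doppler_shift_hz = np.dot(k_scattered - k_incident, velocity) / (2 * np.pi)
print(f"Doppler shift: {doppler_shift_hz / 1e9:.2f} GHz")
```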

  4. A novel approach to the design of a needle driver with multiple DOFs for pediatric laparoscopic surgery.

    PubMed

    Fujii, Masahiro; Sugita, Naohiko; Ishimaru, Tetsuya; Iwanaka, Tadashi; Mitsuishi, Mamoru

    2013-02-01

    The objective of our research was to design and develop a novel needle driver with multiple degrees of freedom (DOFs) for pediatric laparoscopic surgery. Pediatric laparoscopic surgery has many advantages for patients, but the difficulty of the operation is increased due to many restrictions. For example, the motion of the needle driver is restricted by the insertion points, and the operation workspace is smaller in children than in adults. A needle driver with 3 DOFs and a 3.5-mm diameter is proposed and implemented in this study. Grasping DOF is achieved using a piston mechanism actuated by a wire. Deflection and rotation DOFs are actuated by gears. Experiments were conducted to evaluate the workspace and ligation force, and the results confirmed that the needle driver meets all the necessary requirements. Finally, a first reaction of a pediatric surgeon on the suturing and ligaturing capabilities of the prototype is reported. A multi-DOF needle driver with a new mechanism was proposed for pediatric laparoscopic surgery and a first prototype was developed. It is expected that further elaboration of the developed first prototype of the needle driver may contribute to the advancement of pediatric laparoscopic surgery.

  5. A Study of the Effect of the Front-End Styling of Sport Utility Vehicles on Pedestrian Head Injuries

    PubMed Central

    Qin, Qin; Chen, Zheng; Bai, Zhonghao; Cao, Libo

    2018-01-01

    Background The number of sport utility vehicles (SUVs) on the Chinese market is continuously increasing. It is necessary to investigate the relationships between the front-end styling features of SUVs and head injuries at the styling design stage to improve pedestrian protection performance and product development efficiency. Methods Styling feature parameters were extracted from the SUV side contour line, and simplified finite element models were established based on 78 SUV side contour lines. Pedestrian headform impact simulations were performed and validated. The 15 ms head injury criterion (HIC15) at four wrap-around distances was obtained. A multiple linear regression analysis was employed to describe the relationships between the styling feature parameters and the HIC15 at each impact point. Results The relationships between the selected styling features and the HIC15 showed reasonable correlations, and the regression models and the selected independent variables were statistically significant. Conclusions The regression equations obtained by multiple linear regression can be used to assess the performance of SUV styling in protecting pedestrians' heads and provide styling designers with technical guidance regarding their artistic creations.
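
    A minimal sketch of the multiple linear regression step is given below; the feature set, data, and coefficients are placeholders rather than the study's measured styling parameters and HIC15 values.

```python
# Sketch: regress HIC15 at one impact point on styling feature parameters with
# ordinary least squares. Feature names and data are placeholders, not the study's.
import numpy as np

rng = np.random.default_rng(2)
n_vehicles = 78                                  # matches the number of SUV contour lines
features = rng.standard_normal((n_vehicles, 4))  # e.g. hood angle, bumper lead, hood height, windshield angle
hic15 = 900 + features @ np.array([120.0, -80.0, 60.0, 30.0]) + rng.normal(0, 50, n_vehicles)

X = np.column_stack([np.ones(n_vehicles), features])     # add intercept column
coefs, residuals, *_ = np.linalg.lstsq(X, hic15, rcond=None)
r_squared = 1 - residuals[0] / np.sum((hic15 - hic15.mean()) ** 2)
print("regression coefficients:", np.round(coefs, 1))
print(f"R^2 = {r_squared:.3f}")
```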

  6. Spacecraft transfer trajectory design exploiting resonant orbits in multi-body environments

    NASA Astrophysics Data System (ADS)

    Vaquero Escribano, Tatiana Mar

    Historically, resonant orbits have been employed in mission design for multiple planetary flyby trajectories and, more recently, as a source of long-term orbital stability. For instance, in support of a mission concept in NASA's Outer Planets Program, the Jupiter Europa Orbiter spacecraft is designed to encounter two different resonances with Europa during the 'endgame' phase, leading to Europa orbit insertion on the final pass. In 2011, the Interstellar Boundary Explorer spacecraft was inserted into a stable out-of-plane lunar-resonant orbit, the first of this type for a spacecraft in a long-term Earth orbit. However, resonant orbits have not yet been significantly explored as transfer mechanisms between non-resonant orbits in multi-body systems. This research effort focuses on incorporating resonant orbits into the design process to potentially enable the construction of more efficient or even novel transfer scenarios. Thus, the goals in this investigation are twofold: i) to expand the orbit architecture in multi-body environments by cataloging families of resonant orbits, and ii) to assess the role of such families in the design of transfer trajectories with specific patterns and itineraries. The benefits and advantages of employing resonant orbits in the design process are demonstrated through a variety of astrodynamics applications in several multi-body systems. In the Earth-Moon system, locally optimal transfer trajectories from low Earth orbit to selected libration point orbits are designed by leveraging conic arcs and invariant manifolds associated with resonant orbits. Resonant manifolds in the Earth-Moon system offer trajectories that tour the entire space within reasonable time intervals, facilitating the design of libration point orbit tours as well as Earth-Moon cyclers. In the Saturnian system, natural transitions between resonant and libration point orbits are sought and the problem of accessing Hyperion from orbits that are resonant with Titan is also examined. To add versatility to the proposed design method, a system translation technique enables the straightforward transition of solutions from the Earth-Moon system to any Sun-planet or planet-moon three-body system. The circular restricted three-body problem serves as a basis to quickly generate solutions that meet specific requirements, but candidate transfer trajectories are then transitioned to an ephemeris model for validation.
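
    Much of the trajectory design described above is built on the circular restricted three-body problem; the sketch below integrates the planar CR3BP equations of motion in the Earth-Moon rotating frame with an arbitrary initial state, purely to illustrate the underlying model (it is not one of the thesis trajectories).

```python
# Sketch: planar circular restricted three-body problem (CR3BP) equations of motion in
# the Earth-Moon rotating frame, the model used to generate candidate resonant orbits.
# The initial state is an arbitrary placeholder.
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215                      # Earth-Moon mass parameter (approximate)

def cr3bp(t, state, mu=MU):
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)          # distance to the larger primary (Earth)
    r2 = np.hypot(x - 1 + mu, y)      # distance to the smaller primary (Moon)
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

state0 = [0.8, 0.0, 0.0, 0.35]        # nondimensional position/velocity (hypothetical)
sol = solve_ivp(cr3bp, (0.0, 10.0), state0, rtol=1e-10, atol=1e-12, dense_output=True)
print("final rotating-frame state:", np.round(sol.y[:, -1], 4))
```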

  7. Geodata Modeling and Query in Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Adam, Nabil

    1996-01-01

    Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, and road networks; examples of non-spatial data are census counts, land elevations, and soil characteristics. GIS shares common areas with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high quality plots; performing complex operations such as network analysis; enabling spatial searching and overlay operations; supporting spatial analysis and modeling; providing data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access by multiple users. It is to the data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data while providing integration that minimizes redundancy and maintains data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access by multiple users; and processing user transactions efficiently. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness. Significant progress has been made in all of these areas over the last two decades, to the point that many generalized database platforms are now available for developing data-intensive applications that run in real time. While continuous improvement is still being made at a fast and competitive pace, new application areas such as computer-aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to the currently available database technology. At the core of these challenges is the nature of the data that are manipulated. In traditional database applications, the database objects do not have any spatial dimension and, as such, can be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value corresponding to every attribute such as employee id, employee name, employee address, and so on. Thus, every EMPLOYEE instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional. Thus, users may retrieve all entities satisfying one or more constraints. Examples of such constraints include employees with addresses in a certain area code, or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.
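
    The sketch below illustrates the kind of multi-attribute ("orthogonal") search described above, treating each record as a point in attribute space and a query as an axis-aligned box of constraints; the record type and data are illustrative only.

```python
# Sketch: a simple orthogonal range query over point-like records, where each record is a
# point in attribute space and constraints define an axis-aligned box. Illustrative data.
from dataclasses import dataclass

@dataclass
class Employee:
    emp_id: int
    area_code: str
    salary: float

employees = [
    Employee(1, "415", 72000.0),
    Employee(2, "212", 95000.0),
    Employee(3, "415", 58000.0),
]

def range_query(records, area_code, salary_min, salary_max):
    """Return employees in a given area code with salary inside [salary_min, salary_max]."""
    return [e for e in records
            if e.area_code == area_code and salary_min <= e.salary <= salary_max]

print(range_query(employees, "415", 60000.0, 100000.0))
```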

  8. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model according to the anamorphic optic system's angular step. The approach has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
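
    The paper's dedicated point cloud registration method is not specified in the abstract; as a baseline illustration, the sketch below simply transforms each camera's cloud into a common frame using assumed known extrinsic poses and concatenates the results.

```python
# Sketch: fuse point clouds from several depth cameras into one model by applying each
# camera's (assumed known) extrinsic pose and concatenating. The paper uses a dedicated
# registration method; this is only the geometric baseline.
import numpy as np

def to_world(points, rotation, translation):
    """Transform an (N, 3) camera-frame cloud into the world frame."""
    return points @ rotation.T + translation

rng = np.random.default_rng(3)
clouds, poses = [], []
for k in range(3):                                   # three hypothetical depth cameras
    angle = 2 * np.pi * k / 3                        # cameras spaced around the object
    rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                    [np.sin(angle),  np.cos(angle), 0.0],
                    [0.0,            0.0,           1.0]])
    clouds.append(rng.normal(scale=0.1, size=(500, 3)))     # placeholder depth points
    poses.append((rot, np.array([np.cos(angle), np.sin(angle), 0.0])))

merged = np.vstack([to_world(c, R, t) for c, (R, t) in zip(clouds, poses)])
print("synthetic merged cloud:", merged.shape)
```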

  9. A review of radiative detachment studies in tokamak advanced magnetic divertor configurations

    DOE PAGES

    Soukhanovskii, V. A.

    2017-04-28

    The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in advanced magnetic divertor configurations.

  10. A review of radiative detachment studies in tokamak advanced magnetic divertor configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soukhanovskii, V. A.

    The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in advanced magnetic divertor configurations.

  11. Design of the Annular Suspension and Pointing System (ASPS) (including design addendum)

    NASA Technical Reports Server (NTRS)

    Cunningham, D.; Gismondi, T.; Hamilton, B.; Kendig, J.; Kiedrowski, J.; Vroman, A.; Wilson, G.

    1980-01-01

    The Annular Suspension and Pointing System is an experiment pointing mount designed for extremely precise three-axis orientation of Shuttle experiments. It utilizes actively controlled magnetic bearings to provide noncontacting vernier pointing and translational isolation of the experiment. The design of the system is presented and analyzed.

  12. Visual stimulus presentation using fiber optics in the MRI scanner.

    PubMed

    Huang, Ruey-Song; Sereno, Martin I

    2008-03-30

    Imaging the neural basis of visuomotor actions using fMRI is a topic of increasing interest in the field of cognitive neuroscience. One challenge is to present realistic three-dimensional (3-D) stimuli in the subject's peripersonal space inside the MRI scanner. The stimulus generating apparatus must be compatible with strong magnetic fields and must not interfere with image acquisition. Virtual 3-D stimuli can be generated with a stereo image pair projected onto screens or via binocular goggles. Here, we describe designs and implementations for automatically presenting physical 3-D stimuli (point-light targets) in peripersonal and near-face space using fiber optics in the MRI scanner. The feasibility of fiber-optic based displays was demonstrated in two experiments. The first presented a point-light array along a slanted surface near the body, and the second presented multiple point-light targets around the face. Stimuli were presented using phase-encoded paradigms in both experiments. The results suggest that fiber-optic based displays can be a complementary approach for visual stimulus presentation in the MRI scanner.

  13. Automatic identification of comparative effectiveness research from Medline citations to support clinicians’ treatment information needs

    PubMed Central

    Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo

    2014-01-01

    Online knowledge resources such as Medline can address most clinicians’ patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677

  14. Point-of-care testing: applications of 3D printing.

    PubMed

    Chan, Ho Nam; Tan, Ming Jun Andrew; Wu, Hongkai

    2017-08-08

    Point-of-care testing (POCT) devices fulfil a critical need in the modern healthcare ecosystem, enabling the decentralized delivery of imperative clinical strategies in both developed and developing worlds. To achieve diagnostic utility and clinical impact, POCT technologies are immensely dependent on effective translation from academic laboratories out to real-world deployment. However, the current research and development pipeline is highly bottlenecked owing to multiple restraints in material, cost, and complexity of conventionally available fabrication techniques. Recently, 3D printing technology has emerged as a revolutionary, industry-compatible method enabling cost-effective, facile, and rapid manufacturing of objects. This has allowed iterative design-build-test cycles of various things, from microfluidic chips to smartphone interfaces, that are geared towards point-of-care applications. In this review, we focus on highlighting recent works that exploit 3D printing in developing POCT devices, underscoring its utility in all analytical steps. Moreover, we also discuss key advantages of adopting 3D printing in the device development pipeline and identify promising opportunities in 3D printing technology that can benefit global health applications.

  15. Aerodynamic Performance Measurements for a Forward Swept Low Noise Fan

    NASA Technical Reports Server (NTRS)

    Fite, E. Brian

    2006-01-01

    One source of noise in high-tip-speed turbofan engines, caused by shocks, is called multiple pure tone (MPT) noise. A new fan, called the Quiet High Speed Fan (QHSF), showed reduced noise, including MPTs, over the part-speed operating range. The QHSF showed improved performance in most respects relative to a baseline fan; however, a part-speed instability discovered during testing reduced the operating range below acceptable limits. The measured QHSF adiabatic efficiency on the fixed-nozzle acoustic operating line was 85.1 percent versus 82.9 percent for the baseline fan, an improvement of 2.2 percentage points. The operating-line pressure rise at design-point rotational speed and mass flow was 1.764 for the QHSF and 1.755 for the baseline fan. Weight flow at design-point speed was 98.28 lbm/sec for the QHSF and 97.97 lbm/sec for the baseline fan. The operability margin for the QHSF approached 0 percent at the 75 percent speed operating condition. The baseline fan maintained sufficient margin throughout the operating range, as expected. Based on the stage aerodynamic measurements, this concept shows promise for improved performance over current technology if the operability limitations can be solved.

  16. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    NASA Technical Reports Server (NTRS)

    Park, Junhong; Palumbo, Daniel L.

    2004-01-01

    The use of shunted piezoelectric patches to reduce vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics, which are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant-points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.
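
    For context, the sketch below computes the classical single-mode inductive shunt tuning that the modal-based methods mentioned above rely on; it is only the baseline the proposed wave-propagation approach moves beyond, and the patch capacitance and target frequency are assumed values.

```python
# Sketch: the classical single-mode inductive shunt tuning used by modal-based methods,
# shown only as the baseline the abstract contrasts with the proposed wave-based approach.
# Patch capacitance and target frequency are assumed values.
import numpy as np

f_mode = 250.0            # target structural mode frequency in Hz (assumed)
c_p = 50e-9               # piezoelectric patch capacitance in F (assumed)

omega = 2 * np.pi * f_mode
L_shunt = 1.0 / (omega**2 * c_p)   # choose L so the electrical resonance matches the mode
print(f"shunt inductance for a {f_mode:.0f} Hz mode: {L_shunt:.1f} H")
# A series resistance would then be added and sized to dissipate energy near this resonance.
```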

  17. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
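
    The Expected Improvement criterion at the heart of EGO can be evaluated directly from a surrogate's predicted mean and standard deviation; the sketch below shows that computation for a minimization convention with hypothetical surrogate outputs (the kriging model and the SAGE III thermal metric are not reproduced).

```python
# Sketch: the Expected Improvement (EI) acquisition used in EGO, evaluated from a
# surrogate's predicted mean and standard deviation at candidate orbit parameters.
# A maximization problem simply flips the sign convention.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: expected improvement below the current best value f_best."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)

# Hypothetical surrogate predictions at three candidate orbits.
mu = [1.2, 0.9, 1.5]        # predicted thermal metric (arbitrary units)
sigma = [0.05, 0.30, 0.01]  # surrogate uncertainty
print(expected_improvement(mu, sigma, f_best=1.0))
```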

  18. Design and performance of the SLD vertex detector: a 307 Mpixel tracking system

    NASA Astrophysics Data System (ADS)

    Abe, K.; Arodzero, A.; Baltay, C.; Brau, J. E.; Breidenbach, M.; Burrows, P. N.; Chou, A. S.; Crawford, G.; Damerell, C. J. S.; Dervan, P. J.; Dong, D. N.; Emmet, W.; English, R. L.; Etzion, E.; Foss, M.; Frey, R.; Haller, G.; Hasuko, K.; Hertzbach, S. S.; Hoeflich, J.; Huffer, M. E.; Jackson, D. J.; Jaros, J. A.; Kelsey, J.; Lee, I.; Lia, V.; Lintern, A. L.; Liu, M. X.; Manly, S. L.; Masuda, H.; McKemey, A. K.; Moore, T. B.; Nichols, A.; Nagamine, T.; Oishi, N.; Osborne, L. S.; Russell, J. J.; Ross, D.; Serbo, V. V.; Sinev, N. B.; Sinnott, J.; Skarpaas, K. Viii; Smy, M. B.; Snyder, J. A.; Strauss, M. G.; Dong, S.; Suekane, F.; Taylor, F. E.; Trandafir, A. I.; Usher, T.; Verdier, R.; Watts, S. J.; Weiss, E. R.; Yashima, J.; Yuta, H.; Zapalac, G.

    1997-02-01

    This paper describes the design, construction, and initial operation of SLD's upgraded vertex detector which comprises 96 two-dimensional charge-coupled devices (CCDs) with a total of 307 Mpixel. Each pixel functions as an independent particle detecting element, providing space point measurements of charged particle tracks with a typical precision of 4 μm in each co-ordinate. The CCDs are arranged in three concentric cylinders just outside the beam-pipe which surrounds the e +e - collision point of the SLAC Linear Collider (SLC). The detector is a powerful tool for distinguishing displaced vertex tracks, produced by decay in flight of heavy flavour hadrons or tau leptons, from tracks produced at the primary event vertex. The requirements for this detector include a very low mass structure (to minimize multiple scattering) both for mechanical support and to provide signal paths for the CCDs; operation at low temperature with a high degree of mechanical stability; and high speed CCD readout, signal processing, and data sparsification. The lessons learned in achieving these goals should be useful for the construction of large arrays of CCDs or active pixel devices in the future in a number of areas of science and technology.

  19. Propeller performance analysis and multidisciplinary optimization using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Burger, Christoph

    A propeller performance analysis program has been developed and integrated into a genetic algorithm for design optimization. The design tool produces optimal propeller geometries for a given goal, which can include performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis, and a subsonic compact source model is used to determine the acoustic signature. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds-number-based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep, and the local airfoil sections produced blades with favorable tradeoffs between single- and multiple-point optimizations of propeller performance and acoustic noise signatures. Optimizations were performed with both a binary-encoded IMPROVE(c) Genetic Algorithm (GA) and a real-encoded GA; some runs exhibited premature convergence. The newly developed real-encoded GA was used to obtain the majority of the results and generally produced better convergence characteristics than the binary-encoded GA. The optimization trade-offs show that single-point optimized propellers have favorable performance, but their circulation distributions were less smooth than those of dual-point or multiobjective optimizations. Some of the single-point optimizations generated propellers with proplets, which show a loading shift toward the blade tip region. When noise is included in the objective functions, some propellers indicate a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition, the propeller number was increased in some optimizations to reduce the acoustic blade signature.
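
    The sketch below is a minimal real-encoded GA of the general kind described above, applied to a placeholder objective; the actual design tool couples the GA to the vortex-lattice performance and acoustic analyses, which are not reproduced here.

```python
# Sketch: a minimal real-encoded genetic algorithm with tournament selection, blend
# crossover, and Gaussian mutation, minimizing a placeholder objective instead of the
# propeller performance/acoustics analysis.
import numpy as np

rng = np.random.default_rng(4)

def objective(designs):
    # Placeholder for the propeller analysis (performance and/or noise penalty).
    return np.sum((designs - 0.3) ** 2, axis=1)

def real_coded_ga(n_vars=6, pop_size=40, generations=100, mutation_sigma=0.05):
    pop = rng.uniform(0.0, 1.0, (pop_size, n_vars))      # normalized design variables
    for _ in range(generations):
        fitness = objective(pop)
        # Tournament selection: pick the better of two random individuals.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend (arithmetic) crossover followed by Gaussian mutation.
        alpha = rng.uniform(0.0, 1.0, (pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(0.0, mutation_sigma, children.shape)
        pop = np.clip(children, 0.0, 1.0)
    best = pop[np.argmin(objective(pop))]
    return best, objective(best[None, :])[0]

best_design, best_value = real_coded_ga()
print("best normalized design:", np.round(best_design, 3), "objective:", round(best_value, 5))
```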

  20. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
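
    As a simple point of reference (not the authors' algorithms), the sketch below runs a classical simultaneous projection iteration for a single-set split equality problem, using metric projections onto convex sets as the firmly nonexpansive operators; unlike the proposed methods, it relies on operator norms to choose the step size.

```python
# Sketch: a basic simultaneous projection iteration for a split equality problem
#   find x in C, y in Q with A x = B y,
# using metric projections onto simple convex sets. The paper's parallel/cyclic
# algorithms handle multiple sets and avoid the operator-norm-based step size used here.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 2))

proj_C = lambda x: np.clip(x, 0.0, None)     # C = nonnegative orthant in R^3
proj_Q = lambda y: np.clip(y, -1.0, 1.0)     # Q = box [-1, 1]^2

gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)  # safe step size
x, y = np.ones(3), np.ones(2)
for _ in range(2000):
    r = A @ x - B @ y                         # residual of the coupling constraint
    x = proj_C(x - gamma * A.T @ r)
    y = proj_Q(y + gamma * B.T @ r)

print("||Ax - By|| =", round(np.linalg.norm(A @ x - B @ y), 6))
```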

  1. Guidance and Navigation for Rendezvous and Proximity Operations with a Non-Cooperative Spacecraft at Geosynchronous Orbit

    NASA Technical Reports Server (NTRS)

    Barbee, Brent William; Carpenter, J. Russell; Heatwole, Scott; Markley, F. Landis; Moreau, Michael; Naasz, Bo J.; VanEepoel, John

    2010-01-01

    The feasibility and benefits of various spacecraft servicing concepts are currently being assessed, and all require that the servicer spacecraft perform rendezvous, proximity, and capture operations with the target spacecraft to be serviced. Many high-value spacecraft, which would be logical targets for servicing from an economic point of view, are located in geosynchronous orbit, a regime in which autonomous rendezvous and capture operations are not commonplace. Furthermore, existing GEO spacecraft were not designed to be serviced. Most do not have cooperative relative navigation sensors or docking features, and some servicing applications, such as de-orbiting of a non-functional spacecraft, entail rendezvous and capture with a spacecraft that may be non-functional or un-controlled. Several of these challenges have been explored via the design of a notional mission in which a nonfunctional satellite in geosynchronous orbit is captured by a servicer spacecraft and boosted into super-synchronous orbit for safe disposal. A strategy for autonomous rendezvous, proximity operations, and capture is developed, and the Orbit Determination Toolbox (ODTBX) is used to perform a relative navigation simulation to assess the feasibility of performing the rendezvous using a combination of angles-only and range measurements. Additionally, a method for designing efficient orbital rendezvous sequences for multiple target spacecraft is utilized to examine the capabilities of a servicer spacecraft to service multiple targets during the course of a single mission.

  2. 78 FR 16544 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-STP opposite-side interest, Exchange systems would cancel the remaining balance of the incoming STPN.... If an STPN could execute at multiple price points, the incoming STPN would execute at the multiple...

  3. Assisting People with Disabilities Improves Their Collaborative Pointing Efficiency through the Use of the Mouse Scroll Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2013-01-01

    This study provided that people with multiple disabilities can have a collaborative working chance in computer operations through an Enhanced Multiple Cursor Dynamic Pointing Assistive Program (EMCDPAP, a new kind of software that replaces the standard mouse driver, changes a mouse wheel into a thumb/finger poke detector, and manages mouse…

  4. Assisting People with Multiple Disabilities by Improving Their Computer Pointing Efficiency with an Automatic Target Acquisition Program

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling

    2011-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver replaces standard mouse driver, and is able to monitor mouse movement and intercept click action). Initially, both…

  5. Pointing control for the International Comet Mission

    NASA Technical Reports Server (NTRS)

    Leblanc, D. R.; Schumacher, L. L.

    1980-01-01

    The design of the pointing control system for the proposed International Comet Mission, intended to fly by Comet Halley and rendezvous with Comet Tempel-2, is presented. Following a review of mission objectives and the spacecraft configuration, design constraints on the pointing control system controlling the two-axis gimballed scan platform supporting the science instruments are discussed in relation to the scientific requirements of the mission. The primary design options considered for the pointing control system of the baseline spacecraft are summarized, and the selected design, which employs a target-referenced, inertially stabilized control system, is described in detail. The four basic modes of operation of the pointing control subsystem (target acquisition, inertial hold, target track, and slew) are discussed as they relate to operations at Halley and Tempel-2. It is pointed out that the pointing control system design represents a significant advance in the state of the art of pointing control for planetary missions.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Syed Haider; Lee, Jung-Ryul; Jang, Jae-Kyeong

    Pyroshock can cause an aerospace structure to fail its objective by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high-intensity shock wave that is generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated by an explosive-based pyrotechnic event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture programmed in the FPGA for acquisition of large amounts of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.

  7. Materials trade study for lunar/gateway missions.

    PubMed

    Tripathi, R K; Wilson, J W; Cucinotta, F A; Anderson, B M; Simonsen, L C

    2003-01-01

    The National Aeronautics and Space Administration (NASA) administrator has identified protection from radiation hazards as one of the two biggest problems facing the agency with respect to human deep space missions. The intensity and strength of cosmic radiation in deep space make this a 'must solve' problem for space missions. The Moon and two Earth-Moon Lagrange points near the Moon are being proposed as hubs for deep space missions. The focus of this study is to identify approaches to protecting astronauts and habitats from adverse effects of space radiation, both for single missions and for multiple missions by career astronauts to these destinations. As the great cost of added radiation shielding is a potential limiting factor in deep space missions, reduction of mass, without compromising safety, is of paramount importance. The choice of material and the selection of the crew profile play major roles in design and mission operations. Material trade studies in shield design have been performed for multi-segmented missions involving multiple work and living areas during the transport and duty phases of missions to the Moon and to two Earth-Moon co-linear Lagrange points, L1 (between Earth and the Moon) and L2 (on the far side of the Moon as seen from Earth). It is found that, for single missions, current state-of-the-art knowledge of materials provides adequate shielding. On the other hand, the choice of shield material is absolutely critical for career astronauts, and revolutionary materials need to be developed for these missions. This study also provides a guide to the effectiveness of multifunctional materials in preparation for more detailed geometry studies in progress. © 2003 COSPAR. Published by Elsevier Ltd. All rights reserved.

  8. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts

    NASA Astrophysics Data System (ADS)

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail its objective by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high-intensity shock wave that is generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated by an explosive-based pyrotechnic event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture programmed in the FPGA for acquisition of large amounts of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.

  9. High performance multichannel photonic biochip sensors for future point of care diagnostics: an overview on two EU-sponsored projects

    NASA Astrophysics Data System (ADS)

    Giannone, Domenico; Kazmierczak, Andrzej; Dortu, Fabian; Vivien, Laurent; Sohlström, Hans

    2010-04-01

    We present here research work on two optical biosensors which have been developed within two separate European projects (6th and 7th EU Framework Programmes). The biosensors are based on the idea of a disposable biochip, integrating photonics and microfluidics, optically interrogated by a multichannel interrogation platform. The objective is to develop versatile tools, suitable for performing screening tests at Point of Care or for example, at schools or in the field. The two projects explore different options in terms of optical design and different materials. While SABIO used Si3N4/SiO2 ring resonators structures, P3SENS aims at the use of photonic crystal devices based on polymers, potentially a much more economical option. We discuss both approaches to show how they enable high sensitivity and multiple channel detection. The medium term objective is to develop a new detection system that has low cost and is portable but at the same time offering high sensitivity, selectivity and multiparametric detection from a sample containing various components (e.g. blood, serum, saliva, etc.). Most biological sensing devices already present on the market suffer from limitations in multichannel operation capability (either the detection of multiple analytes indicating a given pathology or the simultaneous detection of multiple pathologies). In other words, the number of different analytes that can be detected on a single chip is very limited. This limitation is a main issue addressed by the two projects. The excessive cost per test of conventional bio sensing devices is a second issue that is addressed.

  10. In-Space Networking on NASA's SCAN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking and Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper provides a general description of the testbed's system design and capabilities, discusses in detail the design and lessons learned in the implementation of the network protocols, and describes future plans for continuing research to meet the communication needs of evolving global space systems.

  11. Patient preferences for attributes of multiple sclerosis disease-modifying therapies: development and results of a ratings-based conjoint analysis.

    PubMed

    Wilson, Leslie S; Loucks, Aimee; Gipson, Gregory; Zhong, Lixian; Bui, Christine; Miller, Elizabeth; Owen, Mary; Pelletier, Daniel; Goodin, Douglas; Waubant, Emmanuelle; McCulloch, Charles E

    2015-01-01

    Timely individualized treatment is essential to improving relapsing-remitting multiple sclerosis (RRMS) patient health outcomes, yet little is known about how patients make treatment decisions. We sought to evaluate RRMS patient preferences for risks and benefits of treatment. Fifty patients with RRMS completed conjoint analysis surveys with 16 hypothetical disease-modifying therapy (DMT) medication profiles developed using a fractional factorial design. Medication profiles were assigned preference ratings from 0 (not acceptable) to 10 (most favorable). Medication attributes included a range of benefits, adverse effects, administration routes, and market durations. Analytical models used linear mixed-effects regression. Participants showed the highest preference for medication profiles that would improve their symptoms (β = 0.81-1.03, P < .001), not a proven DMT outcome. Preventing relapses, the main clinical trial outcome, was not associated with significant preferences (P = .35). Each year of preventing magnetic resonance imaging changes and disease symptom progression showed DMT preferences of 0.17 point (β = 0.17, P = .002) and 0.12 point (β = 0.12, P < .001), respectively. Daily oral administration was preferred over all parenteral routes (P < .001). A 1% increase in death or severe disability decreased relative DMT preference by 1.15 points (P < .001). Patient preference focused on symptoms and prevention of progression but not on relapse prevention, the proven drug outcome. Patients were willing to accept some level of serious risk for certain types and amounts of benefits, and they strongly preferred daily oral administration over all other options.

  12. Stability of Geriatric Syndromes in Hospitalized Medicare Patients Discharged to Skilled Nursing Facilities

    PubMed Central

    Simmons, Sandra F.; Bell, Susan; Saraf, Avantika A.; Coelho, Chris Simon; Long, Emily A.; Jacobsen, J. Mary Lou; Schnelle, John F.; Vasilevskis, Eduard E.

    2016-01-01

    Objectives The purpose of this study was to assess multiple geriatric syndromes in a sample of older hospitalized patients discharged to skilled nursing facilities and, subsequently, to home to determine the prevalence and stability of each geriatric syndrome at the point of these care transitions. Design Descriptive, prospective study. Setting One large university-affiliated hospital and four area SNFs. Participants Fifty-eight hospitalized Medicare beneficiaries discharged to SNF. Measurements Research personnel conducted standardized assessments of the following geriatric syndromes at hospital discharge and two weeks following SNF discharge to home: cognitive impairment, depression, incontinence, unintentional weight loss, loss of appetite, pain, pressure ulcers, history of falls, mobility impairment and polypharmacy. Results The average number of geriatric syndromes per patient was 4.4 (± 1.2) at hospital discharge and 3.8 (±1.5) following SNF discharge. There was low to moderate stability for most syndromes. On average, participants had 2.9 syndromes that persisted across both care settings, 1.4 syndromes that resolved, and 0.7 new syndromes that developed between hospital and SNF discharge. Conclusion Geriatric syndromes were prevalent at the point of each care transition but also reflected significant within-individual variability. These findings suggest that multiple geriatric syndromes present during a hospital stay are not transient nor are most syndromes resolved prior to SNF discharge. These results underscore the importance of conducting standardized screening assessments at the point of each care transition and effectively communicating this information to the next provider to support the management of geriatric conditions. PMID:27590032

  13. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture and compare the resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation on four increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
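
    The grid-size figures quoted above can be checked with a little arithmetic, and the two speedup numbers together imply how well the 8-CPU run itself scaled; the sketch below spells this out (the implied 8-CPU scaling is an inference, not a number reported in the abstract).

```python
# Sketch: quick arithmetic check of the grid sizes and speedup figures quoted above.
blocks = 13
points_per_block = 1033 * 1033
total_points = blocks * points_per_block
print(f"points per block: {points_per_block / 1e6:.2f} M")   # ~1.07 M
print(f"total grid points: {total_points / 1e6:.2f} M")      # ~13.87 M

# An 8-GPU run ~6.0x faster than 8 CPUs, combined with ~39.5x over a single CPU,
# implies the 8-CPU run itself scaled to roughly 39.5 / 6.0 over one CPU (inference).
print(f"implied 8-CPU parallel speedup: {39.5 / 6.0:.1f}x")
```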

  14. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.
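
    The "standard linear-quadratic problem" referenced above can be illustrated with a toy along-path model; the sketch below sizes an LQR gain for a double integrator with assumed weights, which is not the paper's actual terminal-area formulation.

```python
# Sketch: a linear-quadratic regulator for a toy double-integrator model of along-path
# position/speed control. Model and weights are illustrative, not the paper's dynamics.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],       # state: [along-path position error, speed error]
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])            # control: commanded acceleration/deceleration
Q = np.diag([1.0, 0.1])          # penalize position error more than speed error (assumed)
R = np.array([[0.5]])            # penalize control effort (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain: u = -K x
print("LQR gain K:", np.round(K, 3))
```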

  15. Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?

    NASA Astrophysics Data System (ADS)

    Harmon, M. E.

    2016-12-01

    There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as positive, helpful assessment of what remains to be done.

  16. An equation-free approach to agent-based computation: Bifurcation analysis and control of stationary states

    NASA Astrophysics Data System (ADS)

    Siettos, C. I.; Gear, C. W.; Kevrekidis, I. G.

    2012-08-01

    We show how the equation-free approach can be exploited to enable agent-based simulators to perform system-level computations such as bifurcation, stability analysis and controller design. We illustrate these tasks through an event-driven agent-based model describing the dynamic behaviour of many interacting investors in the presence of mimesis. Using short bursts of appropriately initialized runs of the detailed, agent-based simulator, we construct the coarse-grained bifurcation diagram of the (expected) density of agents and investigate the stability of its multiple solution branches. When the mimetic coupling between agents becomes strong enough, the stable stationary state loses its stability at a coarse turning point bifurcation. We also demonstrate how the framework can be used to design a wash-out dynamic controller that stabilizes open-loop unstable stationary states even under model uncertainty.

  17. Beyond self-esteem: influence of multiple motives on identity construction.

    PubMed

    Vignoles, Vivian L; Regalia, Camillo; Manzi, Claudia; Golledge, Jen; Scabini, Eugenia

    2006-02-01

    Diverse theories suggest that people are motivated to maintain or enhance feelings of self-esteem, continuity, distinctiveness, belonging, efficacy, and meaning in their identities. Four studies tested the influence of these motives on identity construction, by using a multilevel regression design. Participants perceived as more central those identity elements that provided a greater sense of self-esteem, continuity, distinctiveness, and meaning; this was found for individual, relational, and group levels of identity, among various populations, and by using a prospective design. Motives for belonging and efficacy influenced identity definition indirectly through their direct influences on identity enactment and through their contributions to self-esteem. Participants were happiest about those identity elements that best satisfied motives for self-esteem and efficacy. These findings point to the need for an integrated theory of identity motivation. Copyright 2006 APA, all rights reserved.

  18. Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio

    The purpose of this study is the development of a system to support teachers in designing lesson plans, particularly lesson plans for the new subject "Information Study". We developed a system that generates teaching and learning activity sequences by interlinking lesson activities corresponding to various conditions according to the user's input. Because the user's input comprises multiple pieces of information, contradictions can arise that the system must resolve. This multiobjective optimization problem is solved by Distributed Genetic Algorithms, in which several fitness functions are defined with reference models of the lesson, thinking, and teaching style. Results from various experiments verified the effectiveness and validity of the proposed methods and reference models; on the other hand, some future work on the reference models and evaluation functions was also pointed out.

  19. MODEST: a web-based design tool for oligonucleotide-mediated genome engineering and recombineering

    PubMed Central

    Bonde, Mads T.; Klausen, Michael S.; Anderson, Mads V.; Wallin, Annika I.N.; Wang, Harris H.; Sommer, Morten O.A.

    2014-01-01

    Recombineering and multiplex automated genome engineering (MAGE) offer the possibility to rapidly modify multiple genomic or plasmid sites at high efficiencies. This enables efficient creation of genetic variants, including both single mutants with specifically targeted modifications and combinatorial cell libraries. Manual design of oligonucleotides for these approaches can be tedious and time-consuming, and may not be practical for larger projects targeting many genomic sites. At present, the translation from a desired phenotype (e.g. altered expression of a specific protein) to a designed MAGE oligo that confers the corresponding genetic change is performed manually. To address these challenges, we have developed the MAGE Oligo Design Tool (MODEST). This web-based tool allows the design of MAGE oligos for (i) tuning translation rates by modifying the ribosomal binding site, (ii) generating translational gene knockouts and (iii) introducing other coding or non-coding mutations, including amino acid substitutions, insertions, deletions and point mutations. The tool automatically designs oligos based on desired genotypic or phenotypic changes defined by the user, which can be used for high-efficiency recombineering and MAGE. MODEST is available for free and is open to all users at http://modest.biosustain.dtu.dk. PMID:24838561

  20. Transonic airfoil design for helicopter rotor applications

    NASA Technical Reports Server (NTRS)

    Hassan, Ahmed A.; Jackson, B.

    1989-01-01

    Despite the fact that the flow over a rotor blade is strongly influenced by locally three-dimensional and unsteady effects, practical experience has always demonstrated that substantial improvements in aerodynamic performance can be gained by improving the steady two-dimensional characteristics of the airfoil(s) employed. The two phenomena known to have a great impact on overall rotor performance are: (1) retreating blade stall, with the associated large pressure drag, and (2) compressibility effects on the advancing blade, leading to shock formation and the associated wave drag and boundary-layer separation losses. It was concluded that: optimization routines are a powerful tool for finding solutions to multiple design point problems; the optimization process must be guided by the judicious choice of geometric and aerodynamic constraints; optimization routines should be appropriately coupled to viscous, not inviscid, transonic flow solvers; hybrid design procedures in conjunction with optimization routines represent the most efficient approach for rotor airfoil design; unsteady effects resulting in the delay of lift and moment stall should be modeled using simple empirical relations; and in-flight optimization of aerodynamic loads (e.g., use of variable-rate blowing, flaps, etc.) can satisfy any number of requirements at design and off-design conditions.

  1. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range images is a key step in high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in heritage documentation, registering point cloud and image data for object reconstruction leads to a one-to-multiple problem: a single point cloud must be registered to many images. In current commercial software, this registration is carried out by manually segmenting the point cloud, manually matching point cloud and image data, and manually selecting corresponding 2D image points and 3D cloud points; this process not only greatly reduces working efficiency but also limits registration accuracy and causes seams in the coloured point-cloud texture. To address these problems, this paper uses the whole-object image as intermediate data and applies image matching to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the central-projection reflectance-intensity image of the point cloud and the optical images enables automatic extraction of corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection achieves automatic, high-accuracy registration of the two kinds of data. The method is expected to support high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.
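
    The abstract names a Rodrigues-matrix spatial similarity transformation with iterative weight selection but gives no formulas; purely as a hedged illustration of the core estimation step, the sketch below fits a least-squares similarity transform (scale, rotation, translation) between already-matched 3D point sets using a standard SVD-based (Kabsch/Umeyama-style) solution. Function and variable names are illustrative, and the paper's robust reweighting is omitted.

    ```python
    import numpy as np

    def similarity_transform(src, dst, with_scale=True):
        """Least-squares similarity transform mapping src -> dst (Umeyama-style).

        src, dst: (N, 3) arrays of matched points (e.g., point-cloud features and
        photogrammetrically derived object points). Returns (scale, R, t)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        s_c, d_c = src - mu_s, dst - mu_d
        cov = d_c.T @ s_c / len(src)               # cross-covariance matrix
        U, S, Vt = np.linalg.svd(cov)
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:              # guard against reflections
            D[2, 2] = -1.0
        R = U @ D @ Vt
        scale = (S * np.diag(D)).sum() / s_c.var(axis=0).sum() if with_scale else 1.0
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    # Tiny self-check with a synthetic transform.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 3))
    true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(true_R) < 0:
        true_R[:, 0] *= -1.0                       # force a proper rotation
    dst = 1.7 * src @ true_R.T + np.array([0.3, -2.0, 5.0])
    s, R, t = similarity_transform(src, dst)
    print(round(s, 3))                             # ~1.7
    ```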

  2. Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light

    PubMed Central

    Ayaz, Shirazi Muhammad; Kim, Min Young

    2018-01-01

    In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach is categorized into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm was performed against other variants of ICP. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
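
    The paper's full pipeline (homography-based visual navigation for coarse alignment, followed by refinement and comparison against several ICP variants) is not reproduced here; the following is only a minimal point-to-point ICP sketch of the kind used in a refinement stage, assuming the clouds are already roughly aligned. It relies on NumPy and SciPy, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(A, B):
        """Least-squares rotation R and translation t mapping A onto B (Kabsch)."""
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        return R, cb - R @ ca

    def icp(source, target, iters=50, tol=1e-6):
        """Point-to-point ICP: alternately match nearest neighbours and re-fit."""
        tree = cKDTree(target)
        src = source.copy()
        prev_err = np.inf
        for _ in range(iters):
            dist, idx = tree.query(src)               # closest-point correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
            err = np.sqrt((dist ** 2).mean())         # RMS distance of current matches
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return src, err
    ```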

  3. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach.

    PubMed

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-09-08

    The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives, drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models with shared features targeting similar patients are being implemented simultaneously; a qualitative meta-evaluation approach thus offers an opportunity for cumulative learning at a system level, in addition to informing intervention design and modification.

  4. Assisting People with Multiple Disabilities to Use Computers with Multiple Mice

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien

    2009-01-01

    This study assessed the combination of a multiple-mice aid with two persons with multiple disabilities. Complete mouse operation, which normally requires sound physical function, was distributed among their limbs according to their remaining abilities. Through these decentralized operations, they could still achieve complete mouse pointing control. Initially, both…

  5. Design of relative motion and attitude profiles for three-dimensional resident space object imaging with a laser rangefinder

    NASA Astrophysics Data System (ADS)

    Nayak, M.; Beck, J.; Udrea, B.

    This paper focuses on the aerospace application of a single-beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single-beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed, and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed, and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single-beam LRF without catalog comparison is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. Weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative "metric" that evaluates effectiveness of coverage. Both the edge recognition algorithms and the metric are independent of point cloud density; therefore, they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to mathematically prove that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
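
    As a hedged illustration of the relative-motion building block mentioned above, the sketch below propagates a chaser state with the closed-form Hill-Clohessy-Wiltshire (HCW) solution; the paper's waypoint targeting, attitude profiling, and propellant accounting are not reproduced, and the numbers are illustrative.

    ```python
    import numpy as np

    def hcw_propagate(state0, t, n):
        """Closed-form Hill-Clohessy-Wiltshire propagation of relative motion.

        state0 = [x, y, z, vx, vy, vz] in the LVLH frame (x radial, y along-track,
        z cross-track); n is the mean motion of the reference orbit [rad/s]."""
        x0, y0, z0, vx0, vy0, vz0 = state0
        s, c = np.sin(n * t), np.cos(n * t)
        x  = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
        y  = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 + (4 * s - 3 * n * t) / n * vy0
        z  = c * z0 + (s / n) * vz0
        vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
        vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
        vz = -n * s * z0 + c * vz0
        return np.array([x, y, z, vx, vy, vz])

    # Example: a 100 m radial offset drifts along-track over a quarter orbit,
    # which a waypoint-based maneuver plan must counter with impulses.
    n = 2 * np.pi / 5400.0                      # mean motion of a ~90-minute orbit
    print(hcw_propagate([100.0, 0, 0, 0, 0, 0], t=1350.0, n=n))
    ```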

  6. Comparing five front-of-pack nutrition labels' influence on consumers' perceptions and purchase intentions.

    PubMed

    Gorski Findling, Mary T; Werth, Paul M; Musicus, Aviva A; Bragg, Marie A; Graham, Dan J; Elbel, Brian; Roberto, Christina A

    2018-01-01

    In 2011, a National Academy of Medicine report recommended that packaged food in the U.S. display a uniform front-of-package nutrition label, using a system such as a 0-3 star ranking. Few studies have directly compared this to other labels to determine which best informs consumers and encourages healthier purchases. In 2013, we randomized adult participants (N=1247) in an Internet-based survey to one of six conditions: no label control; single traffic light; multiple traffic light; Facts Up Front; NuVal; or 0-3 star ranking. We compared groups on purchase intentions and accuracy of participants' interpretation of food labels. There were no differences in the nutritional quality of hypothetical shopping baskets across conditions (p=0.845). All labels improved consumers' abilities to judge the nutritional quality of foods relative to no label, but the best designs varied by outcomes. NuVal and multiple traffic light labels led to the greatest accuracy identifying the healthier of two products (p<0.001), while the multiple traffic light also led to the most accurate estimates of saturated fat, sugar, and sodium (p<0.001). The single traffic light outperformed other labels when participants compared nutrient levels between similar products (p<0.03). Single/multiple traffic light and Facts Up Front labels led to the most accurate calories per serving estimations (p<0.001). Although front-of-package labels helped participants more accurately assess products' nutrition information relative to no label, no conditions shifted adults' purchase intentions. Results did not point to a clearly superior label design, but they suggest that a 3-star label might not be best for educating consumers. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.

  8. PrimerDesign-M: A multiple-alignment based multiple-primer design tool for walking across variable genomes

    DOE PAGES

    Yoon, Hyejin; Leitner, Thomas

    2014-12-17

    Analyses of entire viral genomes or mtDNA require comprehensive design of many primers across their genomes. In addition, simultaneous optimization of several DNA primer design criteria may improve overall experimental efficiency and downstream bioinformatic processing. To achieve these goals, we developed PrimerDesign-M. It includes several options for multiple-primer design, allowing researchers to efficiently design walking primers that cover long DNA targets, such as entire HIV-1 genomes, and that are optimized simultaneously for the genetic diversity in multiple alignments and for experimental design constraints given by the user. PrimerDesign-M can also design primers that include DNA barcodes and minimize primer dimerization. PrimerDesign-M finds optimal primers for highly variable DNA targets and facilitates design flexibility by suggesting alternative designs to adapt to experimental conditions.

  9. Pedagogical Content Knowledge in Special Needs Education: A Case Study of an Art Project with the Multiple/Severe Handicapped

    ERIC Educational Resources Information Center

    Murayama, Taku

    2016-01-01

    This paper focuses on a project in teacher education through art activities at the undergraduate level. The main theme is art activities by university students and multiple and severe handicapped students. This project has two significant points for the preparation of special education teachers. One point is the opportunity for field work. Even…

  10. A Case Study Application of the Aggregate Exposure Pathway (AEP) and Adverse Outcome Pathway (AOP) Frameworks to Facilitate the Integration of Human Health and Ecological End Points for Cumulative Risk Assessment (CRA)

    EPA Science Inventory

    Cumulative risk assessment (CRA) methods promote the use of a conceptual site model (CSM) to apportion exposures and integrate risk from multiple stressors. While CSMs may encompass multiple species, evaluating end points across taxa can be challenging due to data availability an...

  11. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  12. Management of Globally Distributed Software Development Projects in Multiple-Vendor Constellations

    NASA Astrophysics Data System (ADS)

    Schott, Katharina; Beck, Roman; Gregory, Robert Wayne

    Global information systems development outsourcing is an apparent trend that is expected to continue in the foreseeable future. Thereby, IS-related services are not only increasingly provided from different geographical sites simultaneously but beyond that from multiple service providers based in different countries. The purpose of this paper is to understand how the involvement of multiple service providers affects the management of the globally distributed information systems development projects. As research on this topic is scarce, we applied an exploratory in-depth single-case study design as research approach. The case we analyzed comprises a global software development outsourcing project initiated by a German bank together with several globally distributed vendors. For data collection and data analysis we have adopted techniques suggested by the grounded theory method. Whereas the extant literature points out the increased management overhead associated with multi-sourcing, the analysis of our case suggests that the required effort for managing global outsourcing projects with multiple vendors depends among other things on the maturation level of the cooperation within the vendor portfolio. Furthermore, our data indicate that this interplay maturity is positively impacted through knowledge about the client that has been derived based on already existing client-vendor relationships. The paper concludes by offering theoretical and practical implications.

  13. Treatment planning with intensity modulated particle therapy for multiple targets in stage IV non-small cell lung cancer

    NASA Astrophysics Data System (ADS)

    Anderle, Kristjan; Stroom, Joep; Vieira, Sandra; Pimentel, Nuno; Greco, Carlo; Durante, Marco; Graeff, Christian

    2018-01-01

    Intensity modulated particle therapy (IMPT) can produce highly conformal plans, but is limited in advanced lung cancer patients with multiple lesions due to motion and planning complexity. A 4D IMPT optimization including all motion states was expanded to include multiple targets, where each target (isocenter) is designated to specific field(s). Furthermore, to achieve stereotactic treatment planning objectives, target and OAR weights plus objective doses were automatically iteratively adapted. Finally, 4D doses were calculated for different motion scenarios. The results from our algorithm were compared to clinical stereotactic body radiation treatment (SBRT) plans. The study included eight patients with 24 lesions in total. Intended dose regimen for SBRT was 24 Gy in one fraction, but lower fractionated doses had to be delivered in three cases due to OAR constraints or failed plan quality assurance. The resulting IMPT treatment plans had no significant difference in target coverage compared to SBRT treatment plans. Average maximum point dose and dose to specific volume in OARs were on average 65% and 22% smaller with IMPT. IMPT could also deliver 24 Gy in one fraction in a patient where SBRT was limited due to the OAR vicinity. The developed algorithm shows the potential of IMPT in treatment of multiple moving targets in a complex geometry.

  14. Wide-band/angle Blazed Surfaces using Multiple Coupled Blazing Resonances

    PubMed Central

    Memarian, Mohammad; Li, Xiaoqiang; Morimoto, Yasuo; Itoh, Tatsuo

    2017-01-01

    Blazed gratings can reflect an oblique incident wave back in the path of incidence, unlike mirrors and metal plates that only reflect specular waves. Perfect blazing (and zero specular scattering) is a type of Wood’s anomaly that has been observed when a resonance condition occurs in the unit-cell of the blazed grating. Such elusive anomalies have been studied thus far as individual perfect blazing points. In this work, we present reflective blazed surfaces that, by design, have multiple coupled blazing resonances per cell. This enables an unprecedented way of tailoring the blazing operation, for widening and/or controlling of blazing bandwidth and incident angle range of operation. The surface can thus achieve blazing at multiple wavelengths, each corresponding to different incident wavenumbers. The multiple blazing resonances are combined similar to the case of coupled resonator filters, forming a blazing passband between the incident wave and the first grating order. Blazed gratings with single and multi-pole blazing passbands are fabricated and measured showing increase in the bandwidth of blazing/specular-reflection-rejection, demonstrated here at X-band for convenience. If translated to appropriate frequencies, such technique can impact various applications such as Littrow cavities and lasers, spectroscopy, radar, and frequency scanned antenna reflectors. PMID:28211506

  15. Multicriteria Analysis of Assembling Buildings from Steel Frame Structures

    NASA Astrophysics Data System (ADS)

    Miniotaite, Ruta

    2017-10-01

    Steel frame structures are often used in the construction of public and industrial buildings. They are used for: all types of sloped roofs; walls of newly built public and industrial buildings; load-bearing structures; and roofs of renovated buildings. The process of assembling buildings from steel frame structures should be analysed as an integrated process influenced by factors such as the construction materials and machinery used, the qualification level of the construction workers, the complexity of the work, and the available finance. A rational technological design solution for assembling buildings from steel frame structures must therefore be found by conducting a multiple criteria analysis. The analysis makes it possible to evaluate the engineering considerations and find unequivocal solutions. The rational alternative for the complex process of assembling buildings from steel frame structures was found through multiple criteria analysis and multiple criteria evaluation. In the multiple criteria evaluation of technological solutions for assembling buildings from steel frame structures by the pairwise comparison method, the criteria are ranked by significance as follows: durability is the most important criterion in the evaluation of alternatives; the price (EUR per unit of measurement) of a part of the assembly process, the construction workers' qualification level (category), the mechanization level of a part of the assembly process (%), and the complexity of the assembly work (in points) are less important criteria.

  16. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
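
    The authors' sensitivity functions and fluorescence-decay model are not given in the abstract; purely to illustrate the D-optimality criterion it applies, the sketch below greedily selects a reduced set of time points that maximizes det(JᵀJ) for the Jacobian of a toy bi-exponential decay. The model, parameter values, and greedy strategy are assumptions, not the paper's method.

    ```python
    import numpy as np

    def sensitivities(t, a=0.4, tau1=0.5, tau2=2.5):
        """Jacobian of a toy bi-exponential decay f = a*exp(-t/tau1) + (1-a)*exp(-t/tau2)
        with respect to (a, tau1, tau2); rows = time points, columns = parameters."""
        e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
        return np.column_stack([e1 - e2,
                                a * t / tau1 ** 2 * e1,
                                (1 - a) * t / tau2 ** 2 * e2])

    def d_optimal_subset(t_grid, n_keep):
        """Greedy D-optimal selection: add the time point that most increases det(J^T J).
        A tiny ridge term breaks ties while fewer points than parameters are chosen."""
        J = sensitivities(t_grid)
        chosen = []
        for _ in range(n_keep):
            best, best_det = None, -np.inf
            for i in range(len(t_grid)):
                if i in chosen:
                    continue
                Js = J[chosen + [i]]
                det = np.linalg.det(Js.T @ Js + 1e-12 * np.eye(3))
                if det > best_det:
                    best, best_det = i, det
            chosen.append(best)
        return np.sort(t_grid[chosen])

    t_full = np.linspace(0.05, 10.0, 90)          # a "complete" 90-point gate set
    print(d_optimal_subset(t_full, n_keep=10))    # reduced 10-point design
    ```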

  17. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.

  18. 77 FR 40490 - Revocation and Modification of Multiple Domestic, Alaskan, and Hawaiian Compulsory Reporting Points

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... FAA aeronautical database as compulsory reporting points. Additionally, this action also requires... aeronautical database. DATES: Effective date 0901 UTC July 10, 2012. The Director of the Federal Register... FAA's aeronautical database as reporting points. The reporting points included five Domestic Reporting...

  19. Behavior of an aeroelastic system beyond critical point of instability

    NASA Astrophysics Data System (ADS)

    Sekar, T. Chandra; Agarwal, Ravindra; Mandal, Alakesh Chandra; Kushari, Abhijit

    2017-11-01

    Understanding the behavior of an aeroelastic system beyond the critical point is essential for effective implementation of any active control scheme, since the control system design depends on the type of instability (bifurcation) the system encounters. Previous studies had found the aeroelastic system to enter chaos beyond the point of instability. In the present work, an experimental study was carried out on an aeroelastic model placed in a wind tunnel to understand the behavior of the aerodynamics around a wing section undergoing classical flutter. The wind speed was increased from zero until the model encountered flutter. Pressure at various locations along the surface of the wing and acceleration at multiple points on the wing were measured in real time for the entire duration of the experiment. A Leading Edge Separation Bubble (LSB) was observed beyond the critical point. The growing strength of the LSB with increasing wind speed was found to alter the aerodynamic moment acting on the system, which forced the system into a second bifurcation. Based on the nature of the response, the system appears to undergo a period-doubling bifurcation rather than a Hopf bifurcation, resulting in chaotic motion. Eliminating the LSB can help prevent the system from entering chaos. Any active flow control scheme that can avoid or counter the formation of the leading-edge separation bubble is a potential solution for controlling classical flutter.

  20. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville

    2016-01-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  1. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville

    2016-03-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  2. Design of the dual-buoy wave energy converter based on actual wave data of East Sea

    NASA Astrophysics Data System (ADS)

    Kim, Jeongrok; Kweon, Hyuck-Min; Jeong, Weon-Mu; Cho, Il-Hyoung; Cho, Hong-Yeon

    2015-07-01

    A new conceptual dual-buoy Wave Energy Converter (WEC) for the enhancement of energy extraction efficiency is suggested. Based on actual wave data, the design process for the suggested WEC is conducted so as to ensure that it is suitable for real sea conditions. Actual wave data measured in Korea's East Sea (position: 36.404° N, 129.274° E) from May 1, 2002 to March 29, 2005 were used as the input wave spectrum for the performance estimation of the dual-buoy WEC. The suggested WEC, a point-absorber type, consists of two concentric floating circular cylinders (an inner buoy and a hollow outer buoy). Multiple resonant frequencies in the proposed WEC affect its Power Take-Off (PTO) performance. Based on the numerical results, several design strategies are proposed to further enhance the extraction efficiency, including intentional mismatching among the heave natural frequencies of the dual buoys, the natural frequency of the internal fluid, and the peak frequency of the input wave spectrum.
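
    Buoy dimensions and masses are not given in the abstract; solely to illustrate the frequency-matching idea (comparing heave natural periods with the spectral peak period), the sketch below uses the textbook heave natural period of a floating vertical cylinder, T = 2π√((m + a33)/(ρ g Aw)), with the added mass a33 set to zero for simplicity. All numbers are placeholders, not the paper's design values.

    ```python
    import numpy as np

    RHO, G = 1025.0, 9.81          # sea-water density [kg/m^3], gravity [m/s^2]

    def heave_natural_period(diameter, mass, added_mass=0.0):
        """Heave natural period of a floating vertical cylinder:
        T = 2*pi*sqrt((m + a33) / (rho*g*Aw)), waterplane area Aw = pi*D^2/4.
        The heave added mass a33 is set to zero here purely for illustration."""
        aw = np.pi * diameter ** 2 / 4.0
        return 2 * np.pi * np.sqrt((mass + added_mass) / (RHO * G * aw))

    # Hypothetical inner and outer buoys (illustrative numbers only):
    print(heave_natural_period(diameter=3.0, mass=8.0e3))    # inner buoy, ~2.1 s
    print(heave_natural_period(diameter=8.0, mass=60.0e3))   # outer buoy, ~2.2 s
    # Compare these periods with the peak period of the measured East Sea spectrum
    # to decide how much intentional mismatching to introduce between the buoys.
    ```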

  3. Achieving integration in mixed methods designs-principles and practices.

    PubMed

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  4. Cyanide binding to human plasma heme-hemopexin: A comparative study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ascenzi, Paolo; Leboffe, Loris

    Highlights: • Cyanide binding to ferric HHPX-heme-Fe. • Cyanide binding to ferrous HHPX-heme-Fe. • Dithionite-mediated reduction of ferric HHPX-heme-Fe-cyanide. • Cyanide binding to HHPX-heme-Fe is limited by ligand deprotonation. • Cyanide dissociation from HHPX-heme-Fe-cyanide is limited by ligand protonation. -- Abstract: Hemopexin (HPX) displays a pivotal role in heme scavenging and delivery to the liver. In turn, heme-Fe-hemopexin (HPX-heme-Fe) displays heme-based spectroscopic and reactivity properties. Here, kinetics and thermodynamics of cyanide binding to ferric and ferrous hexa-coordinate human plasma HPX-heme-Fe (HHPX-heme-Fe(III) and HHPX-heme-Fe(II), respectively), and for the dithionite-mediated reduction of the HHPX-heme-Fe(III)-cyanide complex, at pH 7.4 and 20.0 °C, are reported. Values of thermodynamic and kinetic parameters for cyanide binding to HHPX-heme-Fe(III) and HHPX-heme-Fe(II) are K = (4.1 ± 0.4) × 10^-6 M, k_on = (6.9 ± 0.5) × 10^1 M^-1 s^-1, and k_off = 2.8 × 10^-4 s^-1; and H = (6 ± 1) × 10^-1 M, h_on = 1.2 × 10^-1 M^-1 s^-1, and h_off = (7.1 ± 0.8) × 10^-2 s^-1, respectively. The value of the rate constant for the dithionite-mediated reduction of the HHPX-heme-Fe(III)-cyanide complex is l = 8.9 ± 0.8 M^-1/2 s^-1. HHPX-heme-Fe reactivity is modulated by proton acceptor/donor amino acid residue(s) (e.g., His236) assisting the deprotonation and protonation of the incoming and outgoing ligand, respectively.

  5. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual-isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
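
    As a hedged illustration of the agreement analysis mentioned above (not the authors' code), a Bland-Altman comparison of paired measurements reduces to the bias (mean difference) and the 95% limits of agreement:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman agreement statistics for paired measurements a and b
        (e.g., multiple window spatial registration errors from the 133Ba and
        67Ga methods at the same detector locations). Returns the bias and the
        95% limits of agreement."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Illustrative (made-up) paired errors in mm:
    ba = [0.42, 0.55, 0.31, 0.60, 0.48, 0.37]
    ga = [0.45, 0.50, 0.35, 0.66, 0.44, 0.40]
    bias, loa = bland_altman(ba, ga)
    print(f"bias = {bias:.3f} mm, 95% limits of agreement = {loa[0]:.3f} .. {loa[1]:.3f} mm")
    ```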

  6. LED-driven backlights for automotive displays

    NASA Astrophysics Data System (ADS)

    Strauch, Frank

    2007-09-01

    As a light source the LED has some advantage over the traditionally used fluorescence tube such as longer life or lower space consumption. Consequently customers are asking for the LED lighting design in their products. We introduced in a company owned backlight the white LED technology. This step opens the possibility to have access to the components in the display market. Instead of having a finalized display product which needs to be integrated in the head unit of a car we assemble the backlight, the glass, own electronics and the housing. A major advantage of this concept is the better control of the heat flow generated by the LEDs to the outer side because only a common housing is used for all the components. Also the requirement for slim products can be fulfilled. As always a new technology doesn't come with advantages only. An LED represents a point source compared to the well-known tube thus requiring a mixing zone for the multiple point sources when they enter a light guide. This zone can't be used in displays because of the lack of homogeneity. It's a design goal to minimize this zone which can be helped by the right choice of the LED in terms of slimness. A step ahead is the implementation of RGB LEDs because of their higher color rendering abilities. This allows for the control of the chromaticity point under temperature change but as a drawback needs a larger mixing zone.

  7. Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1985-01-01

    Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It appears that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of rate n/(n+1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has a unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance d_free of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system is an indication of the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. MPSK signal sets in coded systems appear in the work of Divsalar.

  8. 10 CFR Appendix N to Part 52 - Standardization of Nuclear Power Plant Designs: Combined Licenses To Construct and Operate...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Licenses To Construct and Operate Nuclear Power Reactors of Identical Design at Multiple Sites N Appendix N... Designs: Combined Licenses To Construct and Operate Nuclear Power Reactors of Identical Design at Multiple... construct and operate nuclear power reactors of identical design (“common design”) to be located at multiple...

  9. Cascade Optimization Strategy Maximizes Thrust for High-Speed Civil Transport Propulsion System Concept

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to a multidisciplinary optimization tool COMETBOARDS that was developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, the positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem by using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results only for a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems for their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of the design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine (a Mach-2.4 mixed-flow turbofan), along with its configuration, is shown with the optimum thrust normalized with respect to NEPP results; COMETBOARDS added value in the design optimization of the HSCT engine.

  10. A coaxial slot antenna with frequency of 433 MHz for microwave ablation therapies: design, simulation, and experimental research.

    PubMed

    Jiang, Yingxu; Zhao, Jinzhe; Li, Weitao; Yang, Yamin; Liu, Jia; Qian, Zhiyu

    2017-11-01

    Investigation of the structures and properties of antennas is important in the design of microwave ablation (MWA) systems. In this study, we examined the performance of novel tri-slot and single-slot antennas operating at 433 MHz under ex vivo conditions. The dielectric properties of liver tissue under different thermal coagulation levels were explored, which is useful for evaluating the ablation state of the tissue and simulating the temperature field. The performance of the antennas was then analyzed numerically using the finite element method (FEM). The results indicated that the present 433 MHz antennas could produce a gourd-shaped MWA area with a longer length. Compared to an antenna operating at 2450 MHz, the designed single-slot antenna could obtain a larger MWA area. In addition, multiple-point ablations and a larger MWA area could be achieved simultaneously by using the present tri-slot antenna. This study has potential for the innovative design of MWA antennas for the treatment of liver tumors over a large range and a long length.

  11. Computed tomography-based tissue-engineered scaffolds in craniomaxillofacial surgery.

    PubMed

    Smith, M H; Flanagan, C L; Kemppainen, J M; Sack, J A; Chung, H; Das, S; Hollister, S J; Feinberg, S E

    2007-09-01

    Tissue engineering provides an alternative modality allowing for decreased morbidity of donor-site grafting and decreased rejection of less compatible alloplastic tissues. Using image-based design and computer software, a precisely sized and shaped scaffold for osseous tissue regeneration can be created via selective laser sintering. Polycaprolactone has been used to create a condylar ramus unit (CRU) scaffold for application in temporomandibular joint reconstruction in a Yucatan minipig animal model. Following sacrifice, micro-computed tomography and histology were used to demonstrate the efficacy of this particular scaffold design. A proof-of-concept surgery demonstrated cartilaginous tissue regeneration along the articulating surface with exuberant osseous tissue formation. Bone volumes and tissue mineral density at both the 1- and 3-month time points demonstrated significant new bone growth interior and exterior to the scaffold. Computationally designed scaffolds can support masticatory function in a large animal model as well as both osseous and cartilage regeneration. Our group is continuing to evaluate multiple implant designs in both young and mature Yucatan minipig animals.

  12. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work a spur gear reduction unit is considered. A review of the relevant literature in the area of gear design indicates that compact design of a gearbox involves a complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are of conflicting nature. The focus is on developing a design space based on module, pinion teeth and face width by using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints drawn from several novel literature sources. Attention has been devoted to various novel constraints such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs from various algorithms, such as the genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II, are compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face width for a minimum centre distance and a maximum power transmission for any given material.
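
    The paper's constraint set (scoring criterion number, flash temperature, film thickness, involute interference, contact ratio) and objective formulations are not reproduced here; the sketch below only illustrates the general pattern of enumerating a discrete (module, pinion teeth, face width) design space and extracting the non-dominated (Pareto) set for two conflicting objectives. The bounds and the stand-in objective expressions are placeholders, not the paper's formulas.

    ```python
    import itertools

    def pareto_front(points):
        """Return indices of non-dominated points for (minimise f1, maximise f2)."""
        idx = []
        for i, (f1, f2) in enumerate(points):
            dominated = any((g1 <= f1 and g2 >= f2) and (g1 < f1 or g2 > f2)
                            for j, (g1, g2) in enumerate(points) if j != i)
            if not dominated:
                idx.append(i)
        return idx

    # Discrete design space (placeholder bounds, not the paper's):
    modules    = [2.0, 2.5, 3.0, 4.0]           # mm
    teeth      = range(17, 31)                  # pinion teeth (>= 17 avoids undercut)
    facewidths = [20.0, 25.0, 30.0, 40.0]       # mm
    gear_ratio = 4.0

    designs, objectives = [], []
    for m, z, b in itertools.product(modules, teeth, facewidths):
        centre_distance = 0.5 * m * z * (1 + gear_ratio)   # f1: minimise
        power_index     = m ** 2 * z * b                   # f2: crude stand-in, maximise
        designs.append((m, z, b))
        objectives.append((centre_distance, power_index))

    for i in pareto_front(objectives)[:5]:                 # a few trade-off designs
        print(designs[i], objectives[i])
    ```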

  13. Effects of Differential Reinforcement and Rules With Feedback on Preference for Choice and Verbal Reports

    PubMed Central

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M; Vanselow, Nicholas R

    2012-01-01

    We evaluated the effects of differential reinforcement and accurate verbal rules with feedback on the preference for choice and the verbal reports of 6 adults. Participants earned points on a probabilistic schedule by completing the terminal links of a concurrent-chains arrangement in a computer-based game of chance. In free-choice terminal links, participants selected 3 numbers from an 8-number array; in restricted-choice terminal links participants selected the order of 3 numbers preselected by a computer program. A pop-up box then informed the participants if the numbers they selected or ordered matched or did not match numbers generated by the computer but not displayed; matching in a trial resulted in one point earned. In baseline sessions, schedules of reinforcement were equal across free- and restricted-choice arrangements and a running tally of points earned was shown each trial. The effects of differentially reinforcing restricted-choice selections were evaluated using a reversal design. For 4 participants, the effects of providing a running tally of points won by arrangement and verbal rules regarding the schedule of reinforcement were also evaluated using a nonconcurrent multiple-baseline-across-participants design. Results varied across participants but generally demonstrated that (a) preference for choice corresponded more closely to verbal reports of the odds of winning than to reinforcement schedules, (b) rules and feedback were correlated with more accurate verbal reports, and (c) preference for choice corresponded more highly to the relative number of reinforcements obtained across free- and restricted-choice arrangements in a session than to the obtained probability of reinforcement or to verbal reports of the odds of winning. PMID:22754103

  14. Effects of differential reinforcement and rules with feedback on preference for choice and verbal reports.

    PubMed

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M; Vanselow, Nicholas R

    2012-01-01

    We evaluated the effects of differential reinforcement and accurate verbal rules with feedback on the preference for choice and the verbal reports of 6 adults. Participants earned points on a probabilistic schedule by completing the terminal links of a concurrent-chains arrangement in a computer-based game of chance. In free-choice terminal links, participants selected 3 numbers from an 8-number array; in restricted-choice terminal links participants selected the order of 3 numbers preselected by a computer program. A pop-up box then informed the participants if the numbers they selected or ordered matched or did not match numbers generated by the computer but not displayed; matching in a trial resulted in one point earned. In baseline sessions, schedules of reinforcement were equal across free- and restricted-choice arrangements and a running tally of points earned was shown each trial. The effects of differentially reinforcing restricted-choice selections were evaluated using a reversal design. For 4 participants, the effects of providing a running tally of points won by arrangement and verbal rules regarding the schedule of reinforcement were also evaluated using a nonconcurrent multiple-baseline-across-participants design. Results varied across participants but generally demonstrated that (a) preference for choice corresponded more closely to verbal reports of the odds of winning than to reinforcement schedules, (b) rules and feedback were correlated with more accurate verbal reports, and (c) preference for choice corresponded more highly to the relative number of reinforcements obtained across free- and restricted-choice arrangements in a session than to the obtained probability of reinforcement or to verbal reports of the odds of winning.

  15. Extreme multistability in a memristor-based multi-scroll hyper-chaotic system.

    PubMed

    Yuan, Fang; Wang, Guangyi; Wang, Xiaowei

    2016-07-01

    In this paper, a new memristor-based multi-scroll hyper-chaotic system is designed. The proposed memristor-based system possesses multiple complex dynamic behaviors compared with other chaotic systems. Various coexisting attractors and hidden coexisting attractors are observed in this system, which means extreme multistability arises. Moreover, by adjusting parameters of the system, this chaotic system can produce single-scroll attractors, double-scroll attractors, and four-scroll attractors. Basic dynamic characteristics of the system are investigated, including equilibrium points and stability, bifurcation diagrams, Lyapunov exponents, and so on. In addition, the presented system is also realized by an analog circuit to confirm the correctness of the numerical simulations.

  16. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
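
    For readers unfamiliar with the technique, a small software sketch of hard-decision Viterbi (maximum-likelihood) decoding for a rate-1/2, constraint-length-3 convolutional code; the code parameters are illustrative and unrelated to the single-chip CMOS decoder discussed above:

```python
# Hard-decision Viterbi decoder for a rate-1/2, K=3 convolutional code (generators 7, 5 octal).
G = [0b111, 0b101]          # generator polynomials

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state            # newest bit in the MSB of a 3-bit register
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                  # keep the two most recent bits as the state
    return out

def viterbi_decode(symbols):
    INF = float("inf")
    metric = {0: 0, 1: INF, 2: INF, 3: INF}   # path metric per 2-bit state
    paths = {s: [] for s in metric}
    for i in range(0, len(symbols), 2):
        r = symbols[i:i + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for state in metric:
            for bit in (0, 1):
                reg = (bit << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[state] + sum(a != b for a, b in zip(r, expect))
                nxt = reg >> 1
                if m < new_metric[nxt]:          # keep the survivor path into each state
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                              # flip one channel bit to emulate noise
assert viterbi_decode(coded) == msg        # single error is corrected
```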

  17. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
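
    A Monte Carlo sketch of the inflation mechanism for a single treatment-control comparison, assuming known variance and a grid-based "worst case" rule that picks the second-stage sample size maximizing the conditional rejection probability of the naive fixed-sample test; the numbers are illustrative and do not reproduce the paper's multiarmed setting:

```python
# Type 1 error of the naive z-test when the stage-2 sample size depends on interim data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, n1 = 0.025, 50
n2_grid = np.arange(10, 501, 10, dtype=float)
z_crit = norm.ppf(1 - alpha)

reps = 100_000
z1 = rng.standard_normal(reps)                 # interim z-statistic under H0
z2 = rng.standard_normal(reps)                 # stage-2 z-statistic under H0

# conditional rejection probability of the naive test for every (z1, n2) pair;
# the adversarial rule picks, for each z1, the n2 that maximizes it
cond = 1 - norm.cdf((z_crit * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1[:, None])
                    / np.sqrt(n2_grid))
n2 = n2_grid[np.argmax(cond, axis=1)]

z_naive = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
print(f"naive one-sided type 1 error ~ {np.mean(z_naive > z_crit):.4f} "
      f"(nominal {alpha})")   # noticeably above the nominal level
```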

  18. Estimating open population site occupancy from presence-absence data lacking the robust design.

    PubMed

    Dail, D; Madsen, L

    2013-03-01

    Many animal monitoring studies seek to estimate the proportion of a study area occupied by a target population. The study area is divided into spatially distinct sites where the detected presence or absence of the population is recorded, and this is repeated in time for multiple seasons. However, when occupied sites are detected with probability p < 1, the lack of a detection does not imply lack of occupancy. MacKenzie et al. (2003, Ecology 84, 2200-2207) developed a multiseason model for estimating seasonal site occupancy (ψt) while accounting for unknown p. Their model performs well when observations are collected according to the robust design, where multiple sampling occasions occur during each season; the repeated sampling aids in the estimation of p. However, their model does not perform as well when the robust design is lacking. In this paper, we propose an alternative likelihood model that yields improved seasonal estimates of p and ψt in the absence of the robust design. We construct the marginal likelihood of the observed data by conditioning on, and summing out, the latent number of occupied sites during each season. A simulation study shows that in cases without the robust design, the proposed model estimates p with less bias than the MacKenzie et al. model and hence improves the estimates of ψt. We apply both models to a data set consisting of repeated presence-absence observations of American robins (Turdus migratorius) with yearly survey periods. The two models are compared to a third estimator available when the repeated counts (from the same study) are considered, with the proposed model yielding estimates of ψt closest to estimates from the point count model. Copyright © 2013, The International Biometric Society.
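
    As a simplified illustration of marginalizing over the latent occupancy state, a single-season occupancy likelihood sketch (the textbook MacKenzie-style building block with made-up data); the paper's multiseason model without the robust design is more involved:

```python
# Single-season occupancy likelihood: sum over the latent occupied/unoccupied state per site.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

detections = np.array([3, 0, 1, 0, 2, 0, 0, 4, 1, 0])   # detections per site (made up)
K = 5                                                    # sampling occasions per site

def neg_log_lik(params):
    psi, p = expit(params)                               # keep probabilities in (0, 1)
    occupied = psi * p**detections * (1 - p)**(K - detections)
    unoccupied = (1 - psi) * (detections == 0)           # only possible if no detections
    return -np.sum(np.log(occupied + unoccupied))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")
```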

  19. Higher Moments of Net-Kaon Multiplicity Distributions at STAR

    NASA Astrophysics Data System (ADS)

    Xu, Ji; STAR Collaboration

    2017-01-01

    Fluctuations of conserved quantities such as baryon number (B), electric charge number (Q), and strangeness number (S) are sensitive to the correlation length and can be used to probe non-Gaussian fluctuations near the critical point. Experimentally, higher moments of the multiplicity distributions have been used to search for the QCD critical point in heavy-ion collisions. In this paper, we report the efficiency-corrected cumulants and their ratios of mid-rapidity (|y| < 0.5) net-kaon multiplicity distributions in Au+Au collisions at √sNN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV collected in 2010, 2011, and 2014 with STAR at RHIC. The centrality and energy dependence of the cumulants and their ratios are presented. Furthermore, the comparisons with baseline calculations (Poisson) and non-critical-point models (UrQMD) are also discussed.
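
    A sketch of how cumulants C1-C4 and the commonly reported ratios are computed from event-by-event net-kaon numbers, using synthetic Skellam-like data; STAR's efficiency and centrality-bin-width corrections are not shown:

```python
# Cumulants C1..C4 of a net-particle distribution and their ratios from toy events.
import numpy as np

rng = np.random.default_rng(0)
net_kaon = rng.poisson(6.0, 100_000) - rng.poisson(5.0, 100_000)  # toy Skellam events

m = net_kaon.mean()
d = net_kaon - m
c1 = m
c2 = np.mean(d**2)
c3 = np.mean(d**3)
c4 = np.mean(d**4) - 3 * c2**2          # fourth cumulant from central moments

print(f"C2/C1 = {c2/c1:.3f}, C3/C2 = {c3/c2:.3f}, C4/C2 = {c4/c2:.3f}")
# For an ideal Skellam baseline with means a, b: C2/C1 = (a+b)/(a-b),
# C3/C2 = (a-b)/(a+b), C4/C2 = 1.
```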

  20. Quasi-experimental designs in practice-based research settings: design and implementation considerations.

    PubMed

    Handley, Margaret A; Schillinger, Dean; Shiboski, Stephen

    2011-01-01

    Although randomized controlled trials are often a gold standard for determining intervention effects, in the area of practice-based research (PBR), there are many situations in which individual randomization is not possible. Alternative approaches to evaluating interventions have received increased attention, particularly those that can retain elements of randomization such that they can be considered "controlled" trials. Methodological design elements and practical implementation considerations are presented for two quasi-experimental design approaches that have considerable promise in PBR settings--the stepped-wedge design and a variant of it, the wait-list cross-over design--along with a case study from a recent PBR intervention for patients with diabetes. PBR-relevant design features include: creation of a cohort over time that collects control data but allows all participants (clusters or patients) to receive the intervention; staggered introduction of clusters; multiple data collection points; and one-way cross-over into the intervention arm. Practical considerations include randomization versus stratification, training run in phases, and an extended time period for overall study completion. Several design features of practice-based research studies can be adapted to local circumstances yet retain elements to improve methodological rigor. Studies that utilize these methods, such as the stepped-wedge design and the wait-list cross-over design, can increase the evidence base for controlled studies conducted within the complex environment of PBR.
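
    A minimal sketch of a stepped-wedge allocation schedule, in which every cluster starts under control and crosses over one way to the intervention at a randomly assigned step; cluster and period counts are arbitrary illustrative choices:

```python
# Generate a stepped-wedge allocation matrix: rows = clusters, columns = periods.
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_periods = 8, 5                     # one baseline period + 4 steps

# two clusters cross over at each of the steps 1..4, in random order
crossover_step = rng.permutation(np.repeat(np.arange(1, n_periods), 2))
schedule = (np.arange(n_periods)[None, :] >= crossover_step[:, None]).astype(int)
print(schedule)   # 0 = control, 1 = intervention; all clusters end in the intervention arm
```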

  1. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integral surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One of the ways to avoid the singular points is to optimize the integration path, which is designed not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
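
    A toy illustration of the path-optimization idea: deform the real integration line to z = t + if(t) and tune f to maximize the average phase factor. The Gaussian action with an imaginary linear term used here is an assumption for demonstration (its optimal constant shift is known analytically); it is not the toy model of the proceedings:

```python
# Maximize the average phase factor |sum W| / sum |W|, W = exp(-S(z)) dz/dt, over a
# parameterized deformation f(t) of the integration path.
import numpy as np
from scipy.optimize import minimize

lam = 3.0
t = np.linspace(-8.0, 8.0, 2001)
dt = t[1] - t[0]

def average_phase_factor(c):
    f = c[0] + c[1] * np.exp(-t**2)               # simple two-parameter ansatz for f(t)
    z = t + 1j * f
    jac = 1 + 1j * np.gradient(f, dt)             # dz/dt along the deformed path
    w = np.exp(-(0.5 * z**2 + 1j * lam * z)) * jac
    return np.abs(np.sum(w)) / np.sum(np.abs(w))  # |<exp(i*theta)>| on this path

print("flat path :", round(average_phase_factor([0.0, 0.0]), 4))   # ~ exp(-lam**2 / 2)
res = minimize(lambda c: -average_phase_factor(c), x0=[0.0, 0.0], method="Nelder-Mead")
print("optimized :", round(average_phase_factor(res.x), 4), "parameters:", res.x)
```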

  2. Nutrition content of brisket point end of part Simental Ongole Crossbred meat in boiled various temperature

    NASA Astrophysics Data System (ADS)

    Riyanto, J.; Sudibya; Cahyadi, M.; Aji, A. P.

    2018-01-01

    The aim of this study was to determine the nutritional content of the brisket point end of Simental Ongole Crossbred beef boiled for various durations. Simental Ongole Crossbred cattle had been fattened for 9 months; they were then slaughtered at a slaughterhouse, and the brisket point end of the meat was prepared for analysis of its nutritional contents using Food Scan. These samples were boiled at 100°C for 0 (TR), 15 (R15), and 30 (R30) minutes, respectively. The data were analysed using a Completely Randomized Design (CRD), and Duncan's multiple range test (DMRT) was conducted to differentiate among the three treatments. The results showed that the boiling treatment significantly affected the moisture and cholesterol contents of the beef (P<0.05), while fat content was not significantly affected. Boiling decreased beef water content from 72.77 to 70.84%; on the other hand, it increased beef protein and cholesterol contents from 20.77 to 25.14% and from 47.55 to 50.45 mg/100 g, respectively. The conclusion of this study was that boiling beef at 100°C for 15 or 30 minutes decreases the water content and increases the protein and cholesterol contents of the brisket point end of Simental Ongole Crossbred beef.

  3. Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R.

    2014-01-01

    In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…

  4. Platform for efficient switching between multiple devices in the intensive care unit.

    PubMed

    De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them into their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and providing tailored and dynamic information at the point of care. The platform is designed based on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After a user identifies to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device. The visualization of the data is adapted to the type of device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated for scalability, performance and user experience. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution to enable the efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform have been evaluated, and it was shown that the response time and scalability of the platform were within an acceptable range.

  5. Student experiences across multiple flipped courses in a single curriculum.

    PubMed

    Khanova, Julia; Roth, Mary T; Rodgers, Jo Ellen; McLaughlin, Jacqueline E

    2015-10-01

    The flipped classroom approach has garnered significant attention in health professions education, which has resulted in calls for curriculum-wide implementations of the model. However, research to support the development of evidence-based guidelines for large-scale flipped classroom implementations is lacking. This study was designed to examine how students experience the flipped classroom model of learning in multiple courses within a single curriculum, as well as to identify specific elements of flipped learning that students perceive as beneficial or challenging. A qualitative analysis of students' comments (n = 6010) from mid-course and end-of-course evaluations of 10 flipped courses (in 2012-2014) was conducted. Common and recurring themes were identified through systematic iterative coding and sorting using the constant comparison method. Multiple coders, agreement through consensus and member checking were utilised to ensure the trustworthiness of findings. Several themes emerged from the analysis: (i) the perceived advantages of flipped learning coupled with concerns about implementation; (ii) the benefits of pre-class learning and factors that negatively affect these benefits, such as quality and quantity of learning materials, as well as overall increase in workload, especially in the context of multiple concurrent flipped courses; (iii) the role of the instructor in the flipped learning environment, particularly in engaging students in active learning and ensuring instructional alignment, and (iv) the need for assessments that emphasise the application of knowledge and critical thinking skills. Analysis of data from 10 flipped courses provided insight into common patterns of student learning experiences specific to the flipped learning model within a single curriculum. The study points to the challenges associated with scaling the implementation of the flipped classroom across multiple courses. Several core elements critical to the effective design and implementation of the flipped classroom model are identified. © 2015 John Wiley & Sons Ltd.

  6. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    DTIC Science & Technology

    2009-01-30

    results are scaled as floating point operations per second, obtained by counting the number of floating point additions and multiplications in the...black horizontal line. Perhaps the most striking feature at first is the fact that the memory bandwidth measured for flux lifting transcends this...theoretical peak performance values. For a suitable CPU-limited workload, this means that a single workstation equipped with multiple GPUs can do work that

  7. Knowledge, data and interests: Challenges in participation of diverse stakeholders in HIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negev, Maya, E-mail: negevm@bgu.ac.il

    2012-02-15

    Stakeholder participation is considered an integral part of HIA. However, the challenges that participation implies in a multi-disciplinary and multi-ethnic society are less studied. This paper presents the manifestations of the multiplicity of sectors and population groups in HIA and discusses the challenges that such diversity imposes. Specifically, there is no common ground between participants, as their positions entail contradictory knowledge regarding the current situation, reliance on distinct data and conflicting interests. This entails usage of multiple professional and ethnic languages, disagreements regarding the definition of health and prioritizing health issues in HIA, and divergent perceptions of risk. These differences between participants are embedded culturally, socially, individually and, maybe most importantly, professionally. This complex picture of diverse stakeholder attributes is grounded in a case study of stakeholder participation in HIA, regarding zoning of a hazardous industry site in Israel. The implication is that participatory HIAs should address the multiplicity of stakeholders and types of knowledge, data and interests in a more comprehensive way. - Highlights: • This paper analyses challenges in participation of diverse stakeholders in HIA. • The multiplicity of disciplines and population groups raises fundamental challenges. • Stakeholders possess distinct and often contradictory knowledge, data and interests. • They speak different languages, and differ on approaches to health and risk perceptions. • Substantial amendments to diverse participation are needed, in HIA and generally.

  8. Airspace Designations and Reporting Points (1997)

    DOT National Transportation Integrated Search

    1997-09-10

    This order, published yearly, provides a listing of all airspace designations and reporting points, and pending amendments to those designations and reporting points, established by the Federal Aviation Administration (FAA) under the authority ...

  9. Architecting Learning Continuities for Families Across Informal Science Experiences

    NASA Astrophysics Data System (ADS)

    Perin, Suzanne Marie

    By first recognizing the valuable social and scientific practices taking place within families as they learn science together across multiple, everyday settings, this dissertation addresses questions of how to design and scaffold activities that build and expand on those practices to foster a deep understanding of science, and how the aesthetic experience of learning science builds connections across educational settings. Families were invited to visit a natural history museum, an aquarium, and a place or activity of the family's choice that they associated with science learning. Some families were asked to use a set of activities during their study visits based on the practices of science (National Research Council, 2012), which were delivered via smartphone app or on paper cards. I use design-based research, video data analysis and interaction analysis to examine how families build connections between informal science learning settings. Chapter 2 outlines the research-based design process of creating activities for families that fostered connections across multiple learning settings, regardless of the topical content of those settings. Implications of this study point to means for linking everyday family social practices such as questioning, observing, and disagreeing to the practices of science through activities that are not site-specific. The next paper delves into the aesthetic experience of science learning, and I use video interaction analysis and linguistic analysis to show how notions of beauty and pleasure (and their opposites) are perfused throughout learning activity. Designing for aesthetic experience overtly -- building on the sensations of enjoyment and pleasure in the learning experience -- can motivate those who might feel alienated by the common conception of science as merely a dispassionate assembly of facts, discrete procedures or inaccessible theory. The third paper, a case study of a family who learns about salmon in each of the sites they visit, highlights the contributions of multiple sites of learning in an ecological view of learning. Finally, the dissertation's conclusion highlights the broad implications for conceiving of the many varied learning settings in a community as an educational infrastructure, and reflections on using aesthetic experience for broadening participation in the sciences through the design of informal environments.

  10. Multiple Intelligences for Differentiated Learning

    ERIC Educational Resources Information Center

    Williams, R. Bruce

    2007-01-01

    There is an intricate literacy to Gardner's multiple intelligences theory that unlocks key entry points for differentiated learning. Using a well-articulated framework, rich with graphic representations, Williams provides a comprehensive discussion of multiple intelligences. He moves the teacher and students from curiosity, to confidence, to…

  11. Integrated controls-structures design methodology development for a class of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Joshi, S. M.; Walz, J. E.; Armstrong, E. S.

    1990-01-01

    Future utilization of space will require large space structures in low-Earth and geostationary orbits. Example missions include: Earth observation systems, personal communication systems, space science missions, space processing facilities, etc., requiring large antennas, platforms, and solar arrays. The dimensions of such structures will range from a few meters to possibly hundreds of meters. For reducing the cost of construction, launching, and operating (e.g., energy required for reboosting and control), it will be necessary to make the structure as light as possible. However, reducing structural mass tends to increase the flexibility which would make it more difficult to control with the specified precision in attitude and shape. Therefore, there is a need to develop a methodology for designing space structures which are optimal with respect to both structural design and control design. In the current spacecraft design practice, it is customary to first perform the structural design and then the controller design. However, the structural design and the control design problems are substantially coupled and must be considered concurrently in order to obtain a truly optimal spacecraft design. For example, let C denote the set of the 'control' design variables (e.g., controller gains), and L the set of the 'structural' design variables (e.g., member sizes). If a structural member thickness is changed, the dynamics would change which would then change the control law and the actuator mass. That would, in turn, change the structural model. Thus, the sets C and L depend on each other. Future space structures can be roughly divided into four mission classes. Class 1 missions include flexible spacecraft with no articulated appendages which require fine attitude pointing and vibration suppression (e.g., large space antennas). Class 2 missions consist of flexible spacecraft with articulated multiple payloads, where the requirement is to fine-point the spacecraft and each individual payload while suppressing the elastic motion. Class 3 missions include rapid slewing of spacecraft without appendages, while Class 4 missions include general nonlinear motion of a flexible spacecraft with articulated appendages and robot arms. Class 1 and 2 missions represent linear mathematical modeling and control system design problems (except for actuator and sensor nonlinearities), while Class 3 and 4 missions represent nonlinear problems. The development of an integrated controls/structures design approach for Class 1 missions is addressed. The performance for these missions is usually specified in terms of (1) root mean square (RMS) pointing errors at different locations on the structure, and (2) the rate of decay of the transient response. Both of these performance measures include the contributions of rigid as well as elastic motion.

  12. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE PAGES

    Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...

    2017-04-01

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  13. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacelli, Giorgio; Coe, Ryan; Patterson, David

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  14. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
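
    A generic sketch of the equilibrium-programming viewpoint: two coupled "design players" (a structural variable and a control variable) iterate their own best responses until a fixed point (Nash equilibrium) is reached; the quadratic costs below are invented for illustration and are not the paper's structural formulations:

```python
# Alternating best responses of two coupled players converging to a Nash equilibrium.

def best_response_s(k):
    # argmin_s of J_s(s, k) = (s - 2)**2 + 0.3 * s * k  ->  set dJ_s/ds = 0
    return 2 - 0.15 * k

def best_response_k(s):
    # argmin_k of J_k(s, k) = (k - 1)**2 + 0.4 * s * k  ->  set dJ_k/dk = 0
    return 1 - 0.2 * s

s, k = 0.0, 0.0
for it in range(100):
    s_new, k_new = best_response_s(k), best_response_k(s)
    if abs(s_new - s) + abs(k_new - k) < 1e-10:
        break
    s, k = s_new, k_new

print(f"equilibrium after {it} iterations: s = {s:.4f}, k = {k:.4f}")
```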

  15. Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    1987-01-01

    The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that have an impact on the design of data management functions in a telerobot are covered. A design concept is also presented that incorporates an intelligent systems manager (ISM) running on a spaceborne symbolic processor (SSP) to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.

  16. Association between education and future leisure-time physical inactivity: a study of Finnish twins over a 35-year follow-up.

    PubMed

    Piirtola, Maarit; Kaprio, Jaakko; Kujala, Urho M; Heikkilä, Kauko; Koskenvuo, Markku; Svedberg, Pia; Silventoinen, Karri; Ropponen, Annina

    2016-08-04

    Education is associated with health related lifestyle choices including leisure-time physical inactivity. However, the longitudinal associations between education and inactivity merit further studies. We investigated the association between education and leisure-time physical inactivity over a 35-year follow-up with four time points controlling for multiple covariates including familial confounding. This study of the population-based Finnish Twin Cohort consisted of 5254 twin individuals born in 1945-1957 (59 % women), of which 1604 were complete same-sexed twin pairs. Data on leisure-time physical activity and multiple covariates was available from four surveys conducted in 1975, 1981, 1990 and 2011 (response rates 72 to 89 %). The association between years of education and leisure-time physical inactivity (<1.5 metabolic equivalent hours/day) was first analysed for each survey. Then, the role of education was investigated for 15-year and 35-year inactivity periods in the longitudinal analyses. The co-twin control design was used to analyse the potential familial confounding of the effects. All analyses were conducted with and without multiple covariates. Odds Ratios (OR) with 95 % Confidence Intervals (CI) were calculated using logistic and conditional (fixed-effects) regression models. Each additional year of education was associated with less inactivity (OR 0.94 to 0.95, 95 % CI 0.92, 0.99) in the cross-sectional age- and sex-adjusted analyses. The associations of education with inactivity in the 15- and 35-year follow-ups showed a similar trend: OR 0.97 (95 % CI 0.93, 1.00) and OR 0.94 (95 % CI 0.91, 0.98), respectively. In all co-twin control analyses, each year of higher education was associated with a reduced likelihood of inactivity suggesting direct effect (i.e. independent from familial confounding) of education on inactivity. However, the point estimates were lower than in the individual-level analyses. Adjustment for multiple covariates did not change these associations. Higher education is associated with lower odds of leisure-time physical inactivity during the three-decade follow-up. The association was found after adjusting for several confounders, including familial factors. Hence, the results point to the conclusion that education has an independent role in the development of long-term physical inactivity and tailored efforts to promote physical activity among lower educated people would be needed throughout adulthood.
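
    A sketch of the co-twin control idea on synthetic data: a matched-pair conditional logistic likelihood in which only outcome-discordant twin pairs contribute, so shared familial factors cancel out; variable names and effect sizes are illustrative, not estimates from the Finnish Twin Cohort:

```python
# Within-pair (conditional) logistic regression of inactivity on years of education.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n_pairs = 800
family = rng.normal(0, 1, n_pairs)                     # shared familial liability
edu = rng.normal(12, 3, (n_pairs, 2)) + family[:, None]
true_beta = -0.08                                      # more education -> less inactivity
logit = -1.0 + true_beta * edu + family[:, None]
inactive = rng.random((n_pairs, 2)) < 1 / (1 + np.exp(-logit))

def neg_cond_log_lik(beta):
    ll = 0.0
    for x, y in zip(edu, inactive):
        if y[0] == y[1]:
            continue                                   # concordant pairs drop out
        x_case = x[y.argmax()]                         # covariate of the inactive twin
        ll += beta * x_case - np.logaddexp(beta * x[0], beta * x[1])
    return -ll

fit = minimize_scalar(neg_cond_log_lik)
print(f"conditional (within-pair) estimate of beta: {fit.x:.3f}")
```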

  17. Designing multi-reservoir system designs via efficient water-energy-food nexus trade-offs - Selecting new hydropower dams for the Blue Nile and Nepal's Koshi Basin

    NASA Astrophysics Data System (ADS)

    Harou, J. J.; Hurford, A.; Geressu, R. T.

    2015-12-01

    Many of the world's multi-reservoir water resource systems are being considered for further development of hydropower and irrigation aiming to meet economic, political and ecological goals. Complex river basins serve many needs so how should the different proposed groupings of reservoirs and their operations be evaluated? How should uncertainty about future supply and demand conditions be factored in? What reservoir designs can meet multiple goals and perform robustly in a context of global change? We propose an optimized multi-criteria screening approach to identify best performing designs, i.e., the selection, size and operating rules of new reservoirs within multi-reservoir systems in a context of deeply uncertain change. Reservoir release operating rules and storage sizes are optimized concurrently for each separate infrastructure design under consideration across many scenarios representing plausible future conditions. Outputs reveal system trade-offs using multi-dimensional scatter plots where each point represents an approximately Pareto-optimal design. The method is applied to proposed Blue Nile River reservoirs in Ethiopia, where trade-offs between capital costs, total and firm energy output, aggregate storage and downstream irrigation and energy provision for the best performing designs are evaluated. The impact of filling period for large reservoirs is considered in a context of hydrological uncertainty. The approach is also applied to the Koshi basin in Nepal where combinations of hydropower storage and run-of-river dams are being considered for investment. We show searching for investment portfolios that meet multiple objectives provides stakeholders with a rich view on the trade-offs inherent in the nexus and how different investment bundles perform differently under plausible futures. Both case-studies show how the proposed approach helps explore and understand the implications of investing in new dams in a global change context.
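
    A minimal sketch of the screening step: evaluate candidate designs on several objectives and keep the non-dominated (approximately Pareto-optimal) ones; the objective values below are synthetic placeholders, not Blue Nile or Koshi results:

```python
# Pareto-dominance filter over candidate reservoir designs (all objectives minimized).
import numpy as np

rng = np.random.default_rng(3)
n_designs = 200
# columns: capital cost, -firm energy, -irrigation supply  (negate quantities to maximize)
objectives = np.column_stack([
    rng.uniform(1, 10, n_designs),
    -rng.uniform(100, 500, n_designs),
    -rng.uniform(20, 80, n_designs),
])

def is_dominated(i, objs):
    others = np.delete(objs, i, axis=0)
    return np.any(np.all(others <= objs[i], axis=1) & np.any(others < objs[i], axis=1))

pareto = [i for i in range(n_designs) if not is_dominated(i, objectives)]
print(f"{len(pareto)} of {n_designs} candidate designs are non-dominated")
```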

  18. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
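
    A rough sketch of the objective described above (a squared loss per time point, an L2,1 group penalty tying each region's weights across time points, and fused and output smoothness terms), minimized here with L-BFGS on an epsilon-smoothed group norm rather than the paper's dedicated algorithm; all data are synthetic:

```python
# Temporally-constrained group sparse regression objective on toy longitudinal data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, n, d = 4, 60, 30                     # time points, subjects, features (regions)
X = rng.normal(size=(T, n, d))
true_w = np.zeros((T, d)); true_w[:, :5] = 1.0
y = np.einsum("tnd,td->tn", X, true_w) + 0.1 * rng.normal(size=(T, n))

lam_group, lam_fused, lam_out, eps = 1.0, 5.0, 0.1, 1e-8

def objective(w_flat):
    W = w_flat.reshape(T, d)
    pred = np.einsum("tnd,td->tn", X, W)
    loss = np.sum((y - pred) ** 2)
    group = np.sum(np.sqrt(np.sum(W ** 2, axis=0) + eps))       # L2,1 across time points
    fused = np.sum((W[1:] - W[:-1]) ** 2)                       # weight smoothness
    out = np.sum((pred[1:] - pred[:-1]) ** 2)                   # output smoothness
    return loss + lam_group * group + lam_fused * fused + lam_out * out

res = minimize(objective, np.zeros(T * d), method="L-BFGS-B")
W_hat = res.x.reshape(T, d)
print("mean |weight| on informative vs. null features:",
      np.abs(W_hat[:, :5]).mean(), np.abs(W_hat[:, 5:]).mean())
```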

  19. CMOS imager for pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)

    2006-01-01

    Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.

  20. A novel PON-based mobile distributed cluster of antennas approach to provide impartial and broadband services to end users

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Saddawi, Samir; Moghaddassi, Jalil; Hussain, Shahab; Zaidi, Syed R.

    2010-01-01

    In this research paper we propose a novel Passive Optical Network (PON) based Mobile Worldwide Interoperability for Microwave Access (WiMAX) access network architecture to provide high capacity and performance multimedia services to mobile WiMAX users. Passive Optical Networks (PONs) do not require powered equipment; hence they cost less and need less network management. WiMAX technology emerges as a viable candidate for the last mile solution. In the conventional WiMAX access networks, the base stations and Multiple Input Multiple Output (MIMO) antennas are connected by point to point lines. In theory, the maximum WiMAX bandwidth is assumed to be 70 Mbit/s over 31 miles. In reality, WiMAX can only provide one or the other, because when operating at maximum range the bit error rate increases and a lower bit rate must be used. Lowering the range allows a device to operate at higher bit rates. Our focus in this research paper is to increase both range and bit rate by utilizing distributed clusters of MIMO antennas connected to WiMAX base stations with PON-based topologies. A novel quality of service (QoS) algorithm is also proposed to provide admission control and scheduling to serve classified traffic. The proposed architecture presents a flexible and scalable system design with different performance requirements and complexity.

  1. Path scheduling for multiple mobile actors in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Trapasiya, Samir D.; Soni, Himanshu B.

    2017-05-01

    In a wireless sensor network (WSN), energy is the main constraint. In this work we have addressed this issue for single as well as multiple mobile actor sensor networks. We have proposed a Rendezvous Point Selection Scheme (RPSS) in which rendezvous nodes are selected by a set-covering approach and, from these, rendezvous points are selected so as to reduce the tour length. The mobile actor's tour is scheduled to pass through those rendezvous points as per the Travelling Salesman Problem (TSP). We have also proposed a novel rendezvous node rotation scheme for fair utilisation of all the nodes. We have compared RPSS with a stationary-actor scheme as well as RD-VT, RD-VT-SMT and WRP-SMT on performance metrics such as energy consumption, network lifetime and route length, and found better outcomes in all cases for a single actor. We have also applied RPSS to the multiple mobile actor case, namely Multi-Actor Single Depot (MASD) termination and Multi-Actor Multiple Depot (MAMD) termination, and observed through extensive simulation that MAMD saves network energy in an optimised way and enhances network lifetime compared to all other schemes.
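
    A sketch of the two steps named above on random node positions: a greedy set-cover pass to choose rendezvous nodes and a nearest-neighbour tour through the resulting rendezvous points; RPSS details such as hop constraints and node rotation are not reproduced:

```python
# Greedy set cover of sensors by rendezvous nodes, then a nearest-neighbour actor tour.
import numpy as np

rng = np.random.default_rng(5)
sensors = rng.uniform(0, 100, (60, 2))
radio_range = 20.0

dist = np.linalg.norm(sensors[:, None, :] - sensors[None, :, :], axis=2)
cover = dist <= radio_range                  # cover[i, j]: node i covers sensor j

uncovered = set(range(len(sensors)))
rendezvous = []
while uncovered:                             # greedy set cover
    best = max(range(len(sensors)),
               key=lambda i: len(uncovered & set(np.where(cover[i])[0])))
    rendezvous.append(best)
    uncovered -= set(np.where(cover[best])[0])

tour, remaining = [rendezvous[0]], set(rendezvous[1:])
while remaining:                             # nearest-neighbour tour construction
    nxt = min(remaining, key=lambda j: dist[tour[-1], j])
    tour.append(nxt)
    remaining.remove(nxt)

length = sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
print(f"{len(rendezvous)} rendezvous points, closed tour length {length:.1f}")
```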

  2. Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.

  3. EV space suit gloves (passive)

    NASA Technical Reports Server (NTRS)

    Fletcher, E. G.; Dodson, J. D.; Elkins, W.; Tickner, E. G.

    1975-01-01

    A pair of pressure and thermal insulating overgloves to be used with an Extravehicular (EV) suit assembly was designed, developed, fabricated, and tested. The design features extensive use of Nomex felt materials in lieu of the multiple layer insulation formerly used with the Apollo thermal glove. The glove theoretically satisfies all of the thermal requirements. The presence of the thermal glove does not degrade pressure glove tactility by more than the acceptable 10% value. On the other hand, the thermal glove generally degrades pressure glove mobility by more than the acceptable 10% value, primarily in the area of the fingers. Life cycling tests were completed with minimal problems. The thermal glove/pressure glove ensemble was also tested for comfort; the test subjects found no problems with the thermal glove although they did report difficulties with pressure points on the pressure glove which were independent of the thermal glove.

  4. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  5. Free-space laser communication technologies III; Proceedings of the Meeting, Los Angeles, CA, Jan. 21, 22, 1991

    NASA Technical Reports Server (NTRS)

    Begley, David L. (Editor); Seery, Bernard D. (Editor)

    1991-01-01

    The present volume on free-space laser communication technologies discusses system analysis, performance, and applications, pointing, acquisition, and tracking in beam control, laboratory demonstration systems, and transmitter and critical component technologies. Attention is given to a space station laser communication transceiver, meeting intersatellite links mission requirements by an adequate optical terminal design, an optical approach to proximity-operations communications for Space Station Freedom, and optical space-to-ground link availability assessment and diversity requirements. Topics addressed include nonmechanical steering of laser beams by multiple aperture antennas, a free-space simulator for laser transmission, heterodyne acquisition and tracking in a free-space diode laser link, and laser terminal attitude determination via autonomous star tracking. Also discussed are stability considerations in relay lens design for optical communications, liquid crystals for lasercom applications, and narrowband optical interference filters.

  6. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    PubMed

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large-scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
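
    A minimal sketch of the decision-point idea as described above: workflows consult named decision points, and policies from different stakeholder groups are registered against them at runtime; the names and the "all policies must allow" composition rule are assumptions for illustration, not PDD's actual API:

```python
# Runtime policy registration and composition at named workflow decision points.
from typing import Callable, Dict, List

PolicyFn = Callable[[dict], bool]
_registry: Dict[str, List[PolicyFn]] = {}

def register_policy(decision_point: str, policy: PolicyFn) -> None:
    _registry.setdefault(decision_point, []).append(policy)

def decide(decision_point: str, context: dict) -> bool:
    # composition rule assumed here: every registered policy must allow the action
    return all(policy(context) for policy in _registry.get(decision_point, []))

# two oblivious stakeholder groups inject policies into the same decision point
register_policy("dataset.access", lambda ctx: ctx["role"] in {"clinician", "researcher"})
register_policy("dataset.access", lambda ctx: not ctx.get("embargoed", False))

print(decide("dataset.access", {"role": "researcher", "embargoed": False}))  # True
print(decide("dataset.access", {"role": "guest"}))                           # False
```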

  7. Design and efficacy of an Ecohealth competency-based course on the prevention and control of vector diseases in Latin America.

    PubMed

    Magaña-Valladares, Laura; Rodríguez, Mario Henry; Betanzos-Reyes, Ángel Francisco; Riojas-Rodríguez, Horacio; Quezada-Jiménez, María Laura; Suárez-Conejero, Juana Elvira; Lamadrid-Figueroa, Héctor

    2018-01-01

    To design and analyze the efficacy of an Ecohealth competency-based course on the prevention and control of vector-borne diseases for specific stakeholders. Multiple stakeholders and sectors of the region were consulted to identify Ecohealth group-specific competencies using an adjusted analysis matrix. Eight courses based on the competencies were implemented to train EA tutors. The effectiveness of the course was evaluated with paired t-tests by intervention group. Strategic, tactical, academic and community stakeholder groups and their competencies were identified. An overall gain of 43 percentage points (p<0.001) was observed in the competency scores of trained tutors, who went on to train 1 033 people. The identification of the stakeholders and their competencies proved useful in guiding training courses that significantly improved the initial competencies and created a critical mass to further advance the EA in the region.
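
    A small sketch of the paired evaluation using scipy's paired t-test on made-up pre/post competency scores roughly mimicking the reported ~43-point gain; the data and scale are illustrative only:

```python
# Paired t-test on synthetic pre/post competency scores for the same tutors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
pre = rng.normal(35, 8, 40).clip(0, 100)
post = (pre + rng.normal(43, 6, 40)).clip(0, 100)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f} points, t = {t_stat:.2f}, p = {p_value:.2g}")
```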

  8. Thin plastic foil X-ray optics with spiral geometry

    NASA Astrophysics Data System (ADS)

    Barbera, Marco; Mineo, Teresa; Perinati, Emanuele; Schnopper, Herbert W.; Taibi, Angelo

    2007-09-01

    By winding a plastic foil ribbon into a spiral cylinder or spiral cones, we can design and build single- or multiple-reflection X-ray grazing-incidence focusing optics with potential applications in astronomy as well as experimental physics. The use of thin plastic foils from common industrial applications, and of a mounting technique which does not require the construction of mandrels, makes these optics very cost effective. A spiral geometry focusing optic produces an annular image of a point source, with the angular size of the annulus depending mainly on the pitch of the winding and the focal length. We use a ray-tracing code to evaluate the performance of cylindrical and double-conical spiral geometries as a function of the design parameters, e.g. focal length, diameter and optic length. Some preliminary results are presented on X-ray imaging tests performed on spiral cylindrical optics.

  9. Free-space laser communication technologies IV; Proceedings of the 4th Conference, Los Angeles, CA, Jan. 23, 24, 1992

    NASA Technical Reports Server (NTRS)

    Begley, David L. (Editor); Seery, Bernard D. (Editor)

    1992-01-01

    Papers included in this volume are grouped under topics of receivers; laser transmitters; components; system analysis, performance, and applications; and beam control (pointing, acquisition, and tracking). Papers are presented on an experimental determination of power penalty contributions in an optical Costas-type phase-locked loop receiver, a resonant laser receiver for free-space laser communications, a simple low-loss technique for frequency-locking lasers, direct phase modulation of laser diodes, and a silex beacon. Particular attention is given to experimental results on an optical array antenna for nonmechanical beam steering, a potassium Faraday anomalous dispersion optical filter, a 100-Mbps resonant cavity phase modulator for coherent optical communications, a numerical simulation of a 325-Mbit/s QPPM optical communication system, design options for an optical multiple-access data relay terminal, CCD-based optical tracking loop design trades, and an analysis of a spatial-tracking subsystem for optical communications.

  10. Evolving serodiagnostics by rationally designed peptide arrays: the Burkholderia paradigm in Cystic Fibrosis

    NASA Astrophysics Data System (ADS)

    Peri, Claudio; Gori, Alessandro; Gagni, Paola; Sola, Laura; Girelli, Daniela; Sottotetti, Samantha; Cariani, Lisa; Chiari, Marcella; Cretich, Marina; Colombo, Giorgio

    2016-09-01

    Efficient diagnosis of emerging and novel bacterial infections is fundamental to guide decisions on therapeutic treatments. Here, we engineered a novel rational strategy to design peptide microarray platforms, which combines structural and genomic analyses to predict the binding interfaces between diverse protein antigens and antibodies against Burkholderia cepacia complex infections present in the sera of Cystic Fibrosis (CF) patients. The predicted binding interfaces on the antigens are synthesized in the form of isolated peptides and chemically optimized for controlled orientation on the surface. Our platform displays multiple Burkholderia-related epitopes and is shown to diagnose infected individuals even in presence of superinfections caused by other prevalent CF pathogens, with limited cost and time requirements. Moreover, our data point out that the specific patterns determined by combined probe responses might provide a characterization of Burkholderia infections even at the subtype level (genomovars). The method is general and immediately applicable to other bacteria.

  11. Life Testing and Diagnostics of a Planar Out-of-Core Thermionic Converter

    NASA Astrophysics Data System (ADS)

    Thayer, Kevin L.; Ramalingam, Mysore L.; Young, Timothy J.; Lamp, Thomas R.

    1994-07-01

    This paper details the design and performance of an automated computer data acquisition system for a planar, out-of-core thermionic converter with CVD rhenium electrodes. The output characteristics of this converter have been mapped for emitter temperatures ranging from approximately 1700 K to 2000 K, and life testing of the converter is presently being performed at the design point of operation. An automated data acquisition system has been constructed to facilitate the collection of current density versus output voltage (J-V) and temperature data from the converter throughout the life test. This system minimizes the amount of human interaction necessary during the life test to measure and archive the data and present it in a usable form. The task was accomplished using a Macintosh IIcx computer, two multiple-purpose interface boards, a digital oscilloscope, a sweep generator, and National Instruments' LabVIEW application software package.

  12. Joint Optimization of Distribution Network Design and Two-Echelon Inventory Control with Stochastic Demand and CO2 Emission Tax Charges.

    PubMed

    Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun

    2017-01-01

    This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and of retailers to located DCs, and the corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and the amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented that considers carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The optimization model was solved using the commercial software LINGO 11. A numerical example is provided to illustrate the application of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory levels, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
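
    To make the shape of such a model concrete, the sketch below sets up a deliberately simplified single-echelon facility-location MIP with a per-unit CO2 tax folded into the shipping cost, using the PuLP solver interface. The paper's two-echelon formulation with stochastic demand, safety stock, and transport mode choice is richer and is solved with LINGO 11; all data values here are invented for illustration.

```python
# A minimal sketch, not the paper's model: single-echelon facility location with a
# CO2 tax term in the objective. Requires the PuLP package; all data are made up.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

retailers = ["r1", "r2", "r3"]
dcs = ["d1", "d2"]
demand = {"r1": 40, "r2": 25, "r3": 60}                 # units per period (illustrative)
fixed_cost = {"d1": 500, "d2": 650}                     # cost of opening each DC
ship_cost = {("r1", "d1"): 2.0, ("r1", "d2"): 3.5,      # cost per unit shipped
             ("r2", "d1"): 2.8, ("r2", "d2"): 1.9,
             ("r3", "d1"): 3.1, ("r3", "d2"): 2.4}
emission = {k: 0.5 * v for k, v in ship_cost.items()}   # kg CO2 per unit (illustrative)
co2_tax = 0.8                                           # tax per kg CO2

prob = LpProblem("dc_location_with_co2_tax", LpMinimize)
open_dc = {j: LpVariable(f"open_{j}", cat=LpBinary) for j in dcs}
assign = {(i, j): LpVariable(f"assign_{i}_{j}", cat=LpBinary)
          for i in retailers for j in dcs}

# Objective: DC fixed costs + transport cost + emission tax charges.
prob += (lpSum(fixed_cost[j] * open_dc[j] for j in dcs)
         + lpSum(demand[i] * (ship_cost[i, j] + co2_tax * emission[i, j]) * assign[i, j]
                 for i in retailers for j in dcs))

for i in retailers:                                     # each retailer served by exactly one DC
    prob += lpSum(assign[i, j] for j in dcs) == 1
for i in retailers:
    for j in dcs:                                       # assignments only to opened DCs
        prob += assign[i, j] <= open_dc[j]

prob.solve()
print({j: int(open_dc[j].value()) for j in dcs})
```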

  13. Multiple positive solutions for a class of integral inclusions

    NASA Astrophysics Data System (ADS)

    Hong, Shihuang

    2008-04-01

    This paper deals with sufficient conditions for the existence of at least two positive solutions for a class of integral inclusions arising in traffic theory. To show our main results, we apply a norm-type expansion and compression fixed point theorem for multivalued maps due to Agarwal and O'Regan [A note on the existence of multiple fixed points for multivalued maps with applications, J. Differential Equations 160 (2000) 389-403].
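
    For orientation, the classical single-valued cone expansion/compression theorem of Krasnoselskii type is stated below; the multivalued norm-type version due to Agarwal and O'Regan cited in the abstract generalizes this statement to multivalued maps.

```latex
% Classical single-valued analogue (Krasnoselskii cone expansion/compression),
% stated only to illustrate the flavor of the multivalued result cited above.
Let $E$ be a Banach space, $K \subset E$ a cone, and $\Omega_1, \Omega_2$ bounded
open subsets of $E$ with $0 \in \Omega_1$ and $\overline{\Omega}_1 \subset \Omega_2$.
Suppose $T : K \cap (\overline{\Omega}_2 \setminus \Omega_1) \to K$ is completely
continuous and either
\begin{itemize}
  \item[(i)]  $\|Tx\| \le \|x\|$ for $x \in K \cap \partial\Omega_1$ and
              $\|Tx\| \ge \|x\|$ for $x \in K \cap \partial\Omega_2$, or
  \item[(ii)] $\|Tx\| \ge \|x\|$ for $x \in K \cap \partial\Omega_1$ and
              $\|Tx\| \le \|x\|$ for $x \in K \cap \partial\Omega_2$.
\end{itemize}
Then $T$ has a fixed point in $K \cap (\overline{\Omega}_2 \setminus \Omega_1)$.
```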

  14. Evaluation of multiple emission point facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miltenberger, R.P.; Hull, A.P.; Strachan, S.

    In 1970, the New York State Department of Environmental Conservation (NYSDEC) assumed responsibility for the environmental aspect of the state's regulatory program for by-product, source, and special nuclear material. The major objective of this study was to provide consultation to NYSDEC and the US NRC to assist NYSDEC in determining if broad-based licensed facilities with multiple emission points were in compliance with NYCRR Part 380. Under this contract, BNL would evaluate a multiple emission point facility, identified by NYSDEC, as a case study. The review would be a nonbinding evaluation of the facility to determine likely dispersion characteristics, compliance with specified release limits, and implementation of the ALARA philosophy regarding effluent release practices. From the data collected, guidance as to areas of future investigation and the impact of new federal regulations were to be developed. Reported here is the case study for the University of Rochester, Strong Memorial Medical Center and Riverside Campus.

  15. Multiple points of equilibrium for active magnetic regenerators using first order magnetocaloric material

    NASA Astrophysics Data System (ADS)

    Niknia, I.; Trevizoli, P. V.; Govindappa, P.; Christiaanse, T. V.; Teyber, R.; Rowe, A.

    2018-05-01

    First order transition materials (FOMs) usually exhibit magnetocaloric effects in a narrow temperature range, which complicates their use in an active magnetic regenerator (AMR) refrigerator. In addition, the magnetocaloric effect in first order materials can vary with the field and temperature history of the material. This study examines the behavior of a MnFe(P,Si) FOM sample in an AMR cycle using a numerical model and experimental measurements. For certain operating conditions, multiple points of equilibrium (MPE) exist for a fixed hot rejection temperature. Stable and unstable points of equilibrium (PEs) are identified, and the impacts of heat loads, operating conditions, and configuration losses on the number of PEs are discussed. It is shown that the existence of multiple PEs can significantly affect the performance of an AMR for certain operating conditions. In addition, the points where MPEs exist appear to be linked to the device itself, not just the material, suggesting the need to layer a regenerator in a way that avoids MPE conditions and to layer with a specific device in mind.
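
    The generic computation behind locating and classifying such equilibrium points can be sketched with a one-dimensional surrogate: treat the cycle-to-cycle cold-end temperature as a map T_next = f(T) at a fixed hot rejection temperature, find its fixed points, and test stability via |f'(T)| < 1. The surrogate map below is invented for illustration and is not the paper's AMR model.

```python
# A minimal sketch of the generic technique, not the paper's AMR model: find the
# fixed points of a 1D temperature map and classify their stability.
import numpy as np

def f(T, T_hot=300.0):
    # Hypothetical surrogate map: a nonmonotonic response that admits several equilibria.
    return T + 2.0 * np.sin((T - 260.0) / 6.0) * np.exp(-((T - 275.0) / 25.0) ** 2)

def equilibrium_points(f, lo=240.0, hi=300.0, n=2000):
    """Find T with f(T) = T by bracketing sign changes of g(T) = f(T) - T, then bisecting."""
    Ts = np.linspace(lo, hi, n)
    g = np.array([f(T) - T for T in Ts])
    points = []
    for a, b, ga, gb in zip(Ts[:-1], Ts[1:], g[:-1], g[1:]):
        if ga == 0.0 or ga * gb < 0.0:        # sign change brackets an equilibrium
            for _ in range(60):               # bisection refinement
                m = 0.5 * (a + b)
                if (f(a) - a) * (f(m) - m) <= 0.0:
                    b = m
                else:
                    a = m
            points.append(0.5 * (a + b))
    return points

def is_stable(f, T, h=1e-4):
    """An equilibrium of the map is stable when |f'(T)| < 1 (central difference estimate)."""
    return abs((f(T + h) - f(T - h)) / (2.0 * h)) < 1.0

for T in equilibrium_points(f):
    print(f"T = {T:6.2f} K  {'stable' if is_stable(f, T) else 'unstable'}")
```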

  16. Application of two procedures for dual-point design of transonic airfoils

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Campbell, Richard L.; Allison, Dennis O.

    1994-01-01

    Two dual-point design procedures were developed to reduce the objective function of a baseline airfoil at two design points. The first procedure to develop a redesigned airfoil used a weighted average of the shapes of two intermediate airfoils redesigned at each of the two design points. The second procedure used a weighted average of two pressure distributions obtained from an intermediate airfoil redesigned at each of the two design points. Each procedure was used to design a new airfoil with reduced wave drag at the cruise condition without increasing the wave drag or pitching moment at the climb condition. Two cycles of the airfoil shape-averaging procedure successfully designed a new airfoil that reduced the objective function and satisfied the constraints. One cycle of the target (desired) pressure-averaging procedure was used to design two new airfoils that reduced the objective function and came close to satisfying the constraints.
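
    The blending step common to both procedures reduces to a weighted average of two single-point results sampled at shared stations, as in the sketch below. This shows only that averaging step under the shared-station assumption, not the full redesign cycle with its wave-drag and pitching-moment constraints.

```python
# A minimal sketch of the averaging step only, assuming both single-point redesigns
# are sampled at the same x stations; it is not the NASA design code.
import numpy as np

def weighted_blend(y_point1, y_point2, w=0.5):
    """Blend two single-point redesigns: w weights the cruise-point result and
    (1 - w) the climb-point result. Works for surface ordinates or Cp targets."""
    y1, y2 = np.asarray(y_point1, float), np.asarray(y_point2, float)
    return w * y1 + (1.0 - w) * y2

# Example with made-up upper-surface ordinates at shared x stations:
y_cruise = np.array([0.000, 0.031, 0.052, 0.060, 0.047, 0.012])
y_climb  = np.array([0.000, 0.033, 0.055, 0.058, 0.044, 0.010])
print(weighted_blend(y_cruise, y_climb, w=0.6))
```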

  17. Reliability and equivalence of alternate forms for the Symbol Digit Modalities Test: implications for multiple sclerosis clinical trials.

    PubMed

    Benedict, Ralph H B; Smerbeck, Audrey; Parikh, Rajavi; Rodgers, Jonathan; Cadavid, Diego; Erlanger, David

    2012-09-01

    Cognitive impairment is common in multiple sclerosis (MS), but is seldom assessed in clinical trials investigating the effects of disease-modifying therapies. The Symbol Digit Modalities Test (SDMT) is a particularly promising tool due to its sensitivity and robust correlation with brain magnetic resonance imaging (MRI) and vocational disability. Unfortunately, there are no validated alternate SDMT forms, which are needed to mitigate practice effects. The aim of the study was to assess the reliability and equivalence of SDMT alternate forms. Twenty-five healthy participants completed each of five alternate versions of the SDMT - the standard form, two versions from the Rao Brief Repeatable Battery, and two forms specifically designed for this study. Order effects were controlled using a Latin-square research design. All five versions of the SDMT produced mean values within 3 raw score points of one another. Three forms were very consistent, and not different by conservative statistical tests. The SDMT test-retest reliability using these forms was good to excellent, with all r values exceeding 0.80. For the first time, we find good evidence that at least three alternate versions of the SDMT are of equivalent difficulty in healthy adults. The forms are reliable, and can be implemented in clinical trials emphasizing cognitive outcomes.
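
    The reliability check referred to here amounts to pairwise correlations between alternate forms. The sketch below shows that calculation on fabricated scores, purely to make the procedure explicit; it is not the study's data.

```python
# Illustration only: pairwise Pearson correlations between alternate forms as a
# form-equivalence / reliability check. Scores are fabricated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(55, 10, size=25)                   # 25 hypothetical participants
forms = ["standard", "rao_A", "rao_B", "new_1", "new_2"]
scores = {f: true_ability + rng.normal(0, 4, size=25) for f in forms}

for i, a in enumerate(forms):
    for b in forms[i + 1:]:
        r = np.corrcoef(scores[a], scores[b])[0, 1]
        print(f"{a:9s} vs {b:9s}  r = {r:.2f}")
```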

  18. Performance of a multiple venturi fuel-air preparation system. [fuel injection for gas turbines

    NASA Technical Reports Server (NTRS)

    Tacina, R. R.

    1979-01-01

    Spatial fuel-air distributions, degree of vaporization, and pressure drop were measured 16.5 cm downstream of the fuel injection plane of a multiple Venturi tube fuel injector. Tests were performed in a 12 cm tubular duct. Test conditions were: a pressure of 0.3 MPa, inlet air temperatures from 400 to 800 K, air velocities of 10 and 20 m/s, and fuel-air ratios of 0.010 and 0.020. The fuel was Diesel #2. Spatial fuel-air distributions were within ±20 percent of the mean at inlet air temperatures above 450 K. At an inlet air temperature of 400 K, the fuel-air distribution was also measured with a 50 percent blockage plate placed 9.2 cm upstream of the fuel injection plane to distort the inlet air velocity profile. Vaporization of the fuel was 50 percent complete at an inlet air temperature of 400 K, and the percentage increased linearly with temperature to complete vaporization at 600 K. The pressure drop was 3 percent at the design point, which was three times greater than the design value and the single tube experiment value. No autoignition or flashback was observed at the conditions tested.

  19. Montblanc1: GPU accelerated radio interferometer measurement equations in support of Bayesian inference for radio observations

    NASA Astrophysics Data System (ADS)

    Perkins, S. J.; Marais, P. C.; Zwart, J. T. L.; Natarajan, I.; Tasse, C.; Smirnov, O.

    2015-09-01

    We present Montblanc, a GPU implementation of the radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple sets of model visibilities. χ² values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model. As most of the elements of the RIME and the χ² calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple χ² values; modified model parameters are transferred to the GPU between iterations. We implemented Montblanc as a Python package based upon NVIDIA's CUDA architecture. As such, it is easy to extend and to implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc's RIME implementation is performant: on an NVIDIA K40 it is approximately 250 times faster than MEQTREES on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator's GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single and double-precision floating point, respectively. However, OSKAR's RIME implementation is more general than Montblanc's BIRO-tailored RIME. Theoretical analysis of Montblanc's dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in the L1 and L2 caches.
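
    The χ² step itself is simple to state. The NumPy sketch below (not Montblanc's CUDA kernels) shows how weighted squared residuals between observed and model visibilities yield the value that BIRO treats as a likelihood term; the arrays are random stand-ins for real data.

```python
# A NumPy analogue of the chi-squared step only, not Montblanc's GPU implementation:
# chi^2 = sum_i w_i * |V_obs,i - V_model,i|^2 over all visibilities.
import numpy as np

def chi_squared(vis_obs, vis_model, weights):
    """Weighted sum of squared complex residuals between observed and model visibilities."""
    residual = vis_obs - vis_model
    return float(np.sum(weights * (residual.real ** 2 + residual.imag ** 2)))

rng = np.random.default_rng(42)
n_vis = 10_000
vis_obs = rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis)
vis_model = vis_obs + 0.01 * (rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis))
weights = np.ones(n_vis)

chi2 = chi_squared(vis_obs, vis_model, weights)
log_likelihood = -0.5 * chi2          # Gaussian noise assumption, up to an additive constant
print(chi2, log_likelihood)
```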

  20. Designing Interactive Multimedia Instruction to Address Soldiers’ Learning Needs

    DTIC Science & Technology

    2014-12-01

    A point of need design seeks to identify and meet specific learning needs. It does so by focusing on the learning needs of an identified group ... instructional design and tailored training techniques to address the Army Learning Model (ALM) point of need concept. The point of need concept focuses both on ... developing six IMI exemplars focused on point of need training, including three variations of needs-focused designs: familiarization, core, and tailored.
