Venter, Anre; Maxwell, Scott E; Bolig, Erika
2002-06-01
Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.
Liu, Guang-Mao; Jin, Dong-Hai; Jiang, Xi-Hang; Zhou, Jian-Ye; Zhang, Yan; Chen, Hai-Bo; Hu, Sheng-Shou; Gui, Xing-Min
The ventricular assist pumps do not always function at the design point; instead, these pumps may operate at unfavorable off-design points. For example, the axial ventricular assist pump FW-2, in which the design point is 5 L/min flow rate against 100 mm Hg pressure increase at 8,000 rpm, sometimes works at off-design flow rates of 1 to 4 L/min. The hemolytic performance of the FW-2 at both the design point and at off-design points was estimated numerically and tested in vitro. Flow characteristics in the pump were numerically simulated and analyzed with special attention paid to the scalar shear stress and exposure time. An in vitro hemolysis test was conducted to verify the numerical results. The simulation results showed that the scalar shear stress in the rotor region at the 1 L/min off-design point was 70% greater than at the 5 L/min design point. The hemolysis index at the 1 L/min off-design point was 3.6 times greater than at the 5 L/min design point. The in vitro results showed that the normalized index of hemolysis increased from 0.017 g/100 L at the 5 L/min design point to 0.162 g/100 L at the 1 L/min off-design point. The hemolysis comparison between the different blood pump flow rates will be helpful for future pump design point selection and will guide the usage of ventricular assist pumps. The hemolytic performance of the blood pump at the clinical working point should receive more attention.
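Numerical hemolysis estimates of this kind typically feed the scalar shear stress and exposure time into a power-law blood-damage model. The sketch below uses the widely cited Giersiepen-Wurzinger coefficients, a common literature choice and not values taken from this study; the shear levels are illustrative round numbers chosen to match the 70% shear increase quoted above.

```python
def hemolysis_index(tau, t, C=3.62e-5, alpha=2.416, beta=0.785):
    """Power-law blood damage model: dHb/Hb = C * tau**alpha * t**beta.

    tau : scalar shear stress [Pa]
    t   : exposure time [s]
    C, alpha, beta : Giersiepen-Wurzinger fit coefficients,
    used here for illustration only (not this paper's values).
    """
    return C * tau**alpha * t**beta

# A 70% increase in shear stress at fixed exposure time
hi_design = hemolysis_index(tau=100.0, t=0.1)
hi_offdesign = hemolysis_index(tau=170.0, t=0.1)
print(hi_offdesign / hi_design)  # ~3.6
```

With these coefficients, a 70% shear increase alone raises the predicted damage by a factor of about 1.7**2.416 ≈ 3.6, the same order as the simulated hemolysis-index ratio reported in the abstract (in the real pump, the longer exposure time at low flow also contributes).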
On the design of a radix-10 online floating-point multiplier
NASA Astrophysics Data System (ADS)
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Barratt, Craig H.
1990-01-01
Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
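The direct and reciprocal expansions discussed above differ only in which variable the first-order Taylor series is written in: the design variable itself, or its reciprocal. A minimal sketch, using an invented stress-like response for which the reciprocal form happens to be exact:

```python
def direct_approx(f0, grad, x0, x):
    """First-order Taylor series in the design variables x_i."""
    return f0 + sum(g * (xi - x0i) for g, x0i, xi in zip(grad, x0, x))

def reciprocal_approx(f0, grad, x0, x):
    """First-order Taylor series in the reciprocal variables y_i = 1/x_i,
    rewritten in terms of x_i: each term gains a factor x0_i / x_i."""
    return f0 + sum(g * (x0i / xi) * (xi - x0i)
                    for g, x0i, xi in zip(grad, x0, x))

# Illustrative response f(A) = P / A (stress in a bar of area A):
# exactly reciprocal in A, so the reciprocal expansion is exact.
P, A0, A = 1000.0, 2.0, 3.0
f0, dfdA = P / A0, -P / A0**2
print(direct_approx(f0, [dfdA], [A0], [A]))      # 250.0 (underestimates)
print(reciprocal_approx(f0, [dfdA], [A0], [A]))  # 333.33... (exact: P/A)
```

This is why neither member of the family dominates: for responses closer to linear in the variables the direct form is more accurate, while for reciprocal-like responses (common in sizing problems) the reciprocal form wins.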
Precision pointing compensation for DSN antennas with optical distance measuring sensors
NASA Technical Reports Server (NTRS)
Scheid, R. E.
1989-01-01
The pointing control loops of Deep Space Network (DSN) antennas do not account for unmodeled deflections of the primary and secondary reflectors. As a result, structural distortions due to unpredictable environmental loads can result in uncompensated boresight shifts which degrade pointing accuracy. The design proposed here can provide real-time bias commands to the pointing control system to compensate for environmental effects on pointing performance. The bias commands can be computed in real time from optically measured deflections at a number of points on the primary and secondary reflectors. Computer simulations with a reduced-order finite-element model of a DSN antenna validate the concept and lead to a proposed design by which a ten-to-one reduction in pointing uncertainty can be achieved under nominal uncertainty conditions.
Lighting Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A database of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area, and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1Area×Time=1AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
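For readers unfamiliar with PSO, a minimal serial sketch of the velocity and position updates follows. The parameter values are common defaults, not the paper's; the paper's contribution, asynchronous parallel evaluation, would replace the inline objective evaluation marked below with work farmed out to whichever processor is free.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal serial particle swarm optimization sketch (common defaults)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # personal best positions
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]            # swarm (global) best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]                                  # inertia
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])  # cognitive
                           + c2 * rng.random() * (gbest[d] - x[i][d]))    # social
                x[i][d] += v[i][d]
            val = f(x[i])   # <-- the costly analysis; parallelized in the paper
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, gbest = val, x[i][:]
    return gbest, gval

best, val = pso(lambda p: sum(pi * pi for pi in p), [(-5, 5)] * 2)
print(val)  # close to 0 for the 2-D sphere function
```

In a synchronous parallel version the inner loop waits for every particle's evaluation before updating the swarm best; the asynchronous variant updates as each evaluation returns, which is what recovers the speedup on heterogeneous clusters.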
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Bowyer, Ted W.; Cameron, Ian M.
2015-10-01
The International Monitoring System contains up to 80 stations around the world that have aerosol and xenon monitoring systems designed to detect releases of radioactive materials to the atmosphere from nuclear tests. A rule of thumb description of plume concentration and duration versus time and distance from the release point is useful when designing and deploying new sample collection systems. This paper uses plume development from atmospheric transport modeling to provide a power-law rule describing atmospheric dilution factors as a function of distance from the release point.
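A rule of the form DF = a * r**b can be recovered from modeled plume data by ordinary least squares in log-log space. The sketch below uses synthetic dilution factors with invented coefficients, purely to illustrate the fitting procedure, not the paper's fitted values.

```python
import math

def fit_power_law(r, df):
    """Fit df = a * r**b by linear least squares on log(df) vs log(r)."""
    n = len(r)
    lx = [math.log(ri) for ri in r]
    ly = [math.log(di) for di in df]
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly))
         / sum((xi - mx) ** 2 for xi in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic dilution factors decaying with distance (illustrative values only)
r = [50.0, 100.0, 200.0, 400.0, 800.0]            # km from release point
df = [1e-6 * (ri / 100.0) ** -1.5 for ri in r]    # exact power law
a, b = fit_power_law(r, df)
print(a, b)  # recovers a = 1e-3, b = -1.5
```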
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via the Verilog HDL fixed-point algorithm and state machine control. According to continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. In the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that, for any given chaotic equation, it can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, iterative value right-shifting and ceiling, and chaotic iterative sequence output, each of which corresponds to a single state under state machine control. Compared with the Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
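The per-iteration right-shift renormalization the abstract describes can be illustrated in software. The sketch below iterates a logistic map, an assumed example system rather than one of the authors' chaotic equations, in Q16.16 fixed point: every multiply doubles the fractional bits, so each product is shifted right to restore the format, exactly the step a state machine would sequence.

```python
FRAC = 16  # Q16.16 fixed point: value = integer / 2**16

def to_fix(x):
    return int(round(x * (1 << FRAC)))

def to_float(q):
    return q / (1 << FRAC)

def logistic_step_fixed(q, r_fix):
    """One iteration of x <- r*x*(1-x) entirely in integer fixed point.

    Each multiply produces 2*FRAC fractional bits, so the product is
    shifted right by FRAC to renormalize -- the 'right shifting' step
    performed once per state in the hardware state machine.
    """
    one = 1 << FRAC
    t = (q * (one - q)) >> FRAC      # x*(1-x), renormalized
    return (r_fix * t) >> FRAC       # r*x*(1-x), renormalized

q = to_fix(0.3)
r_fix = to_fix(3.9)                  # chaotic regime of the logistic map
for _ in range(5):
    q = logistic_step_fixed(q, r_fix)
print(to_float(q))                   # tracks the floating-point orbit closely
```

Because all arithmetic is integer multiply-and-shift, this maps directly onto FPGA DSP slices without the area cost of a floating-point unit, which is the resource saving the abstract claims.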
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to a one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects of designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, whereas inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
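A destructive ("quantic") design of this kind is straightforward to simulate: each animal contributes exactly one concentration. The sketch below generates such data from a one-compartment IV bolus model, C(t) = (Dose/V)·exp(-(CL/V)·t), with log-normal inter-animal variability and residual error; all parameter values are invented for illustration and are not from this study.

```python
import math
import random

def simulate_destructive_study(time_points, n_per_point,
                               cl_pop=1.0, v_pop=10.0, dose=100.0,
                               omega=0.2, sigma=0.1, seed=42):
    """One concentration per animal (destructive sampling).

    Each animal gets its own CL and V drawn with log-normal
    inter-animal variability (sd omega on the log scale), and its
    single measurement carries log-normal residual error (sd sigma).
    C(t) = (dose / V) * exp(-(CL / V) * t)
    All parameter values are illustrative placeholders.
    """
    rng = random.Random(seed)
    data = []
    for t in time_points:
        for _ in range(n_per_point):
            cl = cl_pop * math.exp(rng.gauss(0.0, omega))
            v = v_pop * math.exp(rng.gauss(0.0, omega))
            c = (dose / v) * math.exp(-(cl / v) * t)
            data.append((t, c * math.exp(rng.gauss(0.0, sigma))))
    return data

# A three-time-point design with 8 animals sacrificed per time point
data = simulate_destructive_study([0.5, 5.0, 20.0], 8)
print(len(data))  # 24 observations, one per animal
```

Fitting a population model to many such simulated data sets, and comparing the spread of the estimates across candidate time-point arrangements, is the basic loop behind the efficiency comparison the abstract describes.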
Spacecraft Station-Keeping Trajectory and Mission Design Tools
NASA Technical Reports Server (NTRS)
Chung, Min-Kun J.
2009-01-01
Two tools were developed for designing station-keeping trajectories and estimating delta-v requirements for missions to a small body such as a comet or asteroid. This innovation uses NPOPT, a non-sparse, general-purpose sequential quadratic programming (SQP) optimizer, and the Two-Level Differential Corrector (T-LDC) in LTool (Libration point mission design Tool) to design three kinds of station-keeping scripts: vertical hovering, horizontal hovering, and orbiting. The T-LDC is used to differentially correct several trajectory legs that join hovering points. In a vertical hover, the maximum- and minimum-range points must be connected smoothly while maintaining the spacecraft's range from the small body, all subject to gravity and solar radiation pressure. The same is true for a horizontal hover. A PatchPoint is an LTool class that denotes a space-time event with some extra information for differential correction, including a set of constraints to be satisfied by the T-LDC. Given a set of PatchPoints, each with its own constraint, the T-LDC differentially corrects the entire trajectory by connecting each trajectory leg joined by PatchPoints while satisfying all specified constraints at the same time. Both vertical and horizontal hovering are needed to minimize the delta-v spent for station keeping. A Python interface to NPOPT has been written so that it can be used from an LTool script. In vertical hovering, the spacecraft stays along the line joining the Sun and the small body. An instantaneous delta-v toward the anti-Sun direction is applied at the closest approach to the small body for station keeping. For example, the spacecraft hovers between the minimum-range (2 km) point and the maximum-range (2.5 km) point from the asteroid 1989ML. Horizontal hovering buys more time for a spacecraft to recover if, for any reason, a planned thrust fails, by returning almost to the initial position some time later via a near-elliptical orbit around the small body.
The mapping or staging orbit may be similarly generated using T-LDC with a set of constraints. Some delta-v tables are generated for several different asteroid masses.
The structural approach to shared knowledge: an application to engineering design teams.
Avnet, Mark S; Weigel, Annalisa L
2013-06-01
We propose a methodology for analyzing shared knowledge in engineering design teams. Whereas prior work has focused on shared knowledge in small teams at a specific point in time, the model presented here is both scalable and dynamic. By quantifying team members' common views of design drivers, we build a network of shared mental models to reveal the structure of shared knowledge at a snapshot in time. Based on a structural comparison of networks at different points in time, a metric of change in shared knowledge is computed. Analysis of survey data from 12 conceptual space mission design sessions reveals a correlation between change in shared knowledge and each of several system attributes, including system development time, system mass, and technological maturity. From these results, we conclude that an early period of learning and consensus building could be beneficial to the design of engineered systems. Although we do not examine team performance directly, we demonstrate that shared knowledge is related to the technical design and thus provide a foundation for improving design products by incorporating the knowledge and thoughts of the engineering design team into the process.
Dual keel Space Station payload pointing system design and analysis feasibility study
NASA Technical Reports Server (NTRS)
Smagala, Tom; Class, Brian F.; Bauer, Frank H.; Lebair, Deborah A.
1988-01-01
A Space Station attached Payload Pointing System (PPS) has been designed and analyzed. The PPS is responsible for maintaining fixed payload pointing in the presence of disturbance applied to the Space Station. The payload considered in this analysis is the Solar Optical Telescope. System performance is evaluated via digital time simulations by applying various disturbance forces to the Space Station. The PPS meets the Space Station articulated pointing requirement for all disturbances except Shuttle docking and some centrifuge cases.
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Csank, Jeffrey
2015-01-01
Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. 
It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
2013-01-01
Background Long duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but is important in consideration of the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. Methods/design This study includes three groups (astronauts, bed rest subjects, and ground-based control subjects); for each group the design is single group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points pre-, at three time points during-, and at four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilted bed rest. Bed rest subjects will be assessed at two time points before-, two time points during-, and three time points post-bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained.
In flight, astronauts will complete some of the tasks that they complete pre- and post flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance. Potential changes over time and associations between cognition, motor-behavior, and brain structure and function will be analyzed. Discussion This study explores how spaceflight induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight. PMID:24350728
Bellera, C A; Penel, N; Ouali, M; Bonvalot, S; Casali, P G; Nielsen, O S; Delannes, M; Litière, S; Bonnetain, F; Dabakuyo, T S; Benjamin, R S; Blay, J-Y; Bui, B N; Collin, F; Delaney, T F; Duffaud, F; Filleron, T; Fiore, M; Gelderblom, H; George, S; Grimer, R; Grosclaude, P; Gronchi, A; Haas, R; Hohenberger, P; Issels, R; Italiano, A; Jooste, V; Krarup-Hansen, A; Le Péchoux, C; Mussi, C; Oberlin, O; Patel, S; Piperno-Neumann, S; Raut, C; Ray-Coquard, I; Rutkowski, P; Schuetze, S; Sleijfer, S; Stoeckle, E; Van Glabbeke, M; Woll, P; Gourgou-Bourgade, S; Mathoulin-Pélissier, S
2015-05-01
The use of potential surrogate end points for overall survival, such as disease-free survival (DFS) or time-to-treatment failure (TTF) is increasingly common in randomized controlled trials (RCTs) in cancer. However, the definition of time-to-event (TTE) end points is rarely precise and lacks uniformity across trials. End point definition can impact trial results by affecting estimation of treatment effect and statistical power. The DATECAN initiative (Definition for the Assessment of Time-to-event End points in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for RCT in sarcomas and gastrointestinal stromal tumors (GIST). We first carried out a literature review to identify TTE end points (primary or secondary) reported in publications of RCT. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points. Recommendations were developed through a validated consensus method formalizing the degree of agreement among experts. Recommended guidelines for the definition of TTE end points commonly used in RCT for sarcomas and GIST are provided for adjuvant and metastatic settings, including DFS, TTF, time to progression and others. Use of standardized definitions should facilitate comparison of trials' results, and improve the quality of trial design and reporting. These guidelines could be of particular interest to research scientists involved in the design, conduct, reporting or assessment of RCT such as investigators, statisticians, reviewers, editors or regulatory authorities. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
1984-01-01
The plant design includes a solar pond electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time depends on past operation and future perceived generation plans. This time, or past-history, factor introduces a new dimension into the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in solar input, solar pond energy storage, and the desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed to include a proper but not excessive level of detail. Each model should be targeted to a specific objective and not conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, is becoming an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and comes as very large packages containing a variety of methods and tools. The result is software that is expensive to acquire and difficult to use, the difficulty stemming from the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists, but most of their features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools, which aim to solve as many problems as possible at once. This simple, user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. In this paper we introduce the Pointo family, a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements.
PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotations and documentation to point clouds.
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1972-01-01
The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance, of the model engine and its components. The program was written in the FORTRAN IV language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.
Set membership experimental design for biological systems.
Marvel, Skylar W; Williams, Cranos M
2012-03-21
Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
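The bounded-error, set-based propagation the abstract describes can be illustrated with interval arithmetic. The sketch below, which is only a toy stand-in for the authors' framework, propagates an interval enclosure of the state of an assumed one-parameter decay model dx/dt = -k*x through explicit Euler steps; the model, bounds, and step size are illustrative assumptions, not the paper's case study.

```python
# Minimal sketch of set-based (interval) uncertainty propagation in the
# spirit of bounded-error experimental design: propagate an interval
# enclosure of the state of dx/dt = -k*x through explicit Euler steps.
# The enclosure is conservative (interval arithmetic ignores the
# dependency between x and -k*x), which is acceptable for bounding.

def interval_mul(a, b):
    """Product of two intervals (a_lo, a_hi) * (b_lo, b_hi)."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def propagate(x0, k, t_end, dt):
    """Propagate the interval state x through dx/dt = -k*x with Euler
    steps. x0 and k are intervals (lo, hi); returns the enclosure at
    t_end."""
    x = x0
    for _ in range(round(t_end / dt)):
        dx = interval_mul((-k[1], -k[0]), x)   # the interval -k * x
        x = (x[0] + dt*dx[0], x[1] + dt*dx[1])
    return x

# Uncertain initial state and rate constant, characterized only by
# bounds rather than probability densities:
lo, hi = propagate(x0=(0.9, 1.1), k=(0.4, 0.6), t_end=1.0, dt=0.01)
```

Repeating the propagation with a candidate measurement inserted (and the intervals contracted to be consistent with it) would show how much that measurement shrinks the enclosure, which is the quantity the design procedure ranks.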
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for key sizes of 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed over a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared with similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
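For readers unfamiliar with the operation being accelerated, the sketch below shows textbook point multiplication via double-and-add. It deliberately uses affine coordinates over a tiny prime field GF(17) with an assumed toy curve, not the paper's Jacobian-coordinate binary-field PDPA hardware, so it only illustrates what ECPM computes, not how this design computes it.

```python
# Sketch of elliptic-curve point multiplication via double-and-add.
# Toy curve (an illustrative assumption): y^2 = x^3 + 2x + 2 over GF(17).

P_MOD, A = 17, 2

def point_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # P + (-P) = infinity
    if P == Q:                           # doubling slope
        lam = (3*x1*x1 + A) * pow(2*y1, -1, P_MOD) % P_MOD
    else:                                # addition slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam*lam - x1 - x2) % P_MOD
    y3 = (lam*(x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Compute k*P by scanning the bits of k (double-and-add)."""
    result, addend = None, P
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

G = (5, 1)   # a point on the toy curve
```

The PDPA idea in the abstract corresponds to fusing the two calls inside the loop into one combined hardware datapath; in projective coordinates the field inversions (`pow(..., -1, P_MOD)` here) are also deferred, which is what makes the hardware fast.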
NASA Astrophysics Data System (ADS)
Wray, J. D.
2003-05-01
The robotic observatory telescope must point precisely on the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. 
Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside dome; and the principal design criteria: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.
ERIC Educational Resources Information Center
Klotsche, Jens; Gloster, Andrew T.
2012-01-01
Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; and a travel time forecasting model is then established on the basis of the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel times. The results show that the model achieves high accuracy in travel time forecasting.
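The pipeline described above (differencing, change-point segmentation, then forecasting from the current regime) can be sketched as follows. This is a simplified stand-in: the jump threshold replaces the paper's tuned adaptive search, and a weighted mean of the last segment replaces the fitted ARIMA model.

```python
# Toy version of the abstract's pipeline: detect change points where the
# first difference of the travel-time series jumps, then forecast from
# the most recent segment. Threshold and weights are illustrative.

def segment_by_change_points(series, threshold):
    """Split the series wherever |first difference| exceeds threshold."""
    segments, current = [], [series[0]]
    for prev, cur in zip(series, series[1:]):
        if abs(cur - prev) > threshold:   # change point detected
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

def forecast_next(series, threshold, weight=0.7):
    """Forecast the next value as a weighted mean of the last segment,
    giving `weight` to the most recent observation."""
    last = segment_by_change_points(series, threshold)[-1]
    seg_mean = sum(last) / len(last)
    return weight * last[-1] + (1 - weight) * seg_mean

# Free-flow travel times, then a jump into congestion:
travel_times = [62, 61, 63, 62, 90, 92, 91, 93]
prediction = forecast_next(travel_times, threshold=10)
```

The point of segmenting first is that the forecast is fitted only to observations from the current traffic state, so the stale free-flow samples do not drag the congested-regime prediction down.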
Chaos and complexity by design
Roberts, Daniel A.; Yoshida, Beni
2017-04-20
We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the "frame potential," which is minimized by unitary k-designs and measures the 2-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We also show that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential, providing a quantitative connection between chaos and pseudorandomness. In addition, we prove that these 2k-point correlators for Pauli operators completely determine the k-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
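The model under discussion can be made concrete with a small numeric sketch. The Shannon formulation of Fitts' Law below is standard; the intercept and slope values are illustrative assumptions, not coefficients fitted in these experiments, and the "target size" argument is exactly the quantity the study shows must be chosen per movement type (diagonal extent for point-click, vertical extent for point-drag).

```python
# Fitts' Law sketch: MT = a + b * log2(A/W + 1), where A is the movement
# amplitude and W is the effective target size along the relevant
# direction. Coefficients a and b below are illustrative, not fitted.
import math

def fitts_mt(amplitude, width, a=0.10, b=0.15):
    """Predicted movement time in seconds for one pointing movement."""
    index_of_difficulty = math.log2(amplitude / width + 1)  # bits
    return a + b * index_of_difficulty

# Same amplitude, different effective target sizes: the smaller
# effective target (e.g. the vertical extent used for point-drag)
# predicts a longer movement time.
t_click = fitts_mt(amplitude=300, width=40)
t_drag = fitts_mt(amplitude=300, width=12)
```

This is why the paper's point matters for design analysis: plugging the wrong W into the same formula yields systematically wrong time predictions even though the formula itself is unchanged.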
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
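The core idea can be sketched as follows: estimate a correlation matrix in windows on each side of every candidate split and flag the split where consecutive matrices differ most. This toy version, with an assumed Frobenius-norm distance and no significance testing or boundary correction, only illustrates the objective, not the paper's method.

```python
# Toy change-point detector for correlation networks: slide a split
# point through a multivariate series and score it by the Frobenius
# distance between the correlation matrices on either side.
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    cov = sum((a - mx)*(b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx)**2 for a in x))
    sy = math.sqrt(sum((b - my)**2 for b in y))
    return cov / (sx * sy)

def corr_matrix(window):
    """window: list of variables, each a list of observations."""
    return [[pearson(u, v) for v in window] for u in window]

def change_point(series, half):
    """Return the split index t maximizing the distance between the
    correlation matrices of [t-half, t) and [t, t+half)."""
    best_t, best_d = None, -1.0
    for t in range(half, len(series[0]) - half + 1):
        left = [v[t-half:t] for v in series]
        right = [v[t:t+half] for v in series]
        cl, cr = corr_matrix(left), corr_matrix(right)
        d = math.sqrt(sum((a - b)**2 for ra, rb in zip(cl, cr)
                          for a, b in zip(ra, rb)))
        if d > best_d:
            best_t, best_d = t, d
    return best_t
```

In a real application the score at each candidate split would be compared against a null distribution (e.g. from permutations) rather than simply maximized, which is where the minimal-assumption machinery of the paper comes in.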
Fast underdetermined BSS architecture design methodology for real time applications.
Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R
2015-01-01
In this paper, we propose a high-speed architecture design methodology for the Underdetermined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture were coded in VHDL for a 16-bit word length, and ASIC implementation was carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
14 CFR 25.519 - Jacking and tie-down provisions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... structure must be designed for a vertical load of 1.33 times the vertical static reaction at each jacking point acting singly and in combination with a horizontal load of 0.33 times the vertical static reaction...: (i) The airplane structure must be designed for a vertical load of 1.33 times the vertical reaction...
Real-time seam tracking control system based on line laser visions
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi
2018-07-01
A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams. The feature point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed, and a laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. Once welding began, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of the deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth bias voltages for the robot. Three groups of experiments were conducted on different types of curved welds in a strong-arc, high-spatter noise environment using a 120 A short-circuit Metal Active Gas (MAG) arc welding current. The tracking error was less than 0.32 mm, and the sensor's measurement frequency can reach 20 Hz. The torch end ran smoothly during welding, and the weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.
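The controller structure described above can be caricatured with a small rule table mapping the seam deviation and its rate of change to a correction gain. Everything numeric here, the breakpoints, the gains, and the table itself, is an invented stand-in for the paper's adaptive fuzzy controller with its updated membership functions.

```python
# Minimal rule-table sketch of a fuzzy-style seam-tracking correction:
# larger deviations and faster drifts select larger gains. All values
# are illustrative assumptions, not the paper's tuned controller.

def fuzzify(x, small=0.1, large=0.3):
    """Map a magnitude to a coarse label used by the rule table."""
    m = abs(x)
    return "S" if m < small else ("M" if m < large else "L")

RULES = {  # (|error| label, |error rate| label) -> gain
    ("S", "S"): 0.2, ("S", "M"): 0.4, ("S", "L"): 0.6,
    ("M", "S"): 0.5, ("M", "M"): 0.7, ("M", "L"): 0.9,
    ("L", "S"): 0.8, ("L", "M"): 1.0, ("L", "L"): 1.2,
}

def correction(error_mm, error_rate):
    """Correction command: rule-table gain scaled by the signed error."""
    gain = RULES[(fuzzify(error_mm), fuzzify(error_rate))]
    return -gain * error_mm

u_small = correction(0.05, 0.02)   # small, slowly drifting deviation
u_large = correction(0.5, 0.4)     # large, fast drift
```

The adaptive part of the real controller amounts to rescaling the breakpoints and gains online (the quantization and scale factors mentioned above), so the same rule structure stays appropriate as welding conditions change.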
The vectorization of a ray tracing program for image generation
NASA Technical Reports Server (NTRS)
Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.
1984-01-01
Ray tracing is a widely used method for producing realistic computer-generated images. It involves firing an imaginary ray from a view point, through a point on an image plane, into a three-dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time; in such a serial approach, as much as ninety percent of the execution time is spent computing the intersections of rays with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high-quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
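The intersection kernel that dominates the runtime is a quadratic solve per ray. The sketch below batches that solve over many rays against one sphere; on a vector machine like the CYBER 205 the loop body would map onto hardware vector operations, which is the whole vectorization argument. The scene setup is an illustrative assumption.

```python
# Batched ray-sphere intersection: for each ray o + t*d, solve
# |o + t*d - c|^2 = r^2, i.e. t^2 + 2*b*t + c0 = 0 for unit d.
import math

def intersect_batch(origins, directions, center, radius):
    """Return, per ray, the nearest positive hit distance t, or None
    for a miss. Directions are assumed normalized."""
    hits = []
    cx, cy, cz = center
    for (ox, oy, oz), (dx, dy, dz) in zip(origins, directions):
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = lx*dx + ly*dy + lz*dz            # half the linear coefficient
        c0 = lx*lx + ly*ly + lz*lz - radius*radius
        disc = b*b - c0                      # quadratic discriminant
        if disc < 0:
            hits.append(None)                # ray misses the sphere
        else:
            t = -b - math.sqrt(disc)         # nearer of the two roots
            hits.append(t if t > 0 else None)
    return hits

# Two rays from the origin: one aimed at a unit sphere at z = 5,
# one aimed perpendicular to it.
rays_o = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
rays_d = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]
ts = intersect_batch(rays_o, rays_d, center=(0.0, 0.0, 5.0), radius=1.0)
```

Because every ray runs the identical arithmetic with no data-dependent branching until the final classification, the b, c0, and disc computations vectorize cleanly across the whole pixel batch.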
The Importance and Role of Intracluster Correlations in Planning Cluster Trials
Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark
2008-01-01
There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
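The role of the two correlations in sample size planning can be sketched numerically. The posttest-only design effect below is the classic formula; the pretest-posttest variance inflation factor is a commonly cited form for the time-by-condition contrast in nested cross-sectional designs, and the numeric inputs are illustrative, not values from the trial described.

```python
# Two variance inflation factors used when planning cluster trials.
# m is the number of individuals sampled per community per wave.

def design_effect_posttest(m, icc_within_time):
    """Posttest-only cluster design effect: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (m - 1) * icc_within_time

def design_effect_pre_post(m, icc_within_time, icc_across_time):
    """A commonly cited inflation factor for the time-x-condition
    contrast, 1 + (m - 1)*rho1 - m*rho2, where rho2 is the correlation
    among individuals in the same community at different time points."""
    return 1 + (m - 1) * icc_within_time - m * icc_across_time

m = 100
deff_post = design_effect_posttest(m, icc_within_time=0.02)
deff_pre_post = design_effect_pre_post(m, icc_within_time=0.02,
                                       icc_across_time=0.01)
```

The comparison makes the abstract's warning concrete: whenever the across-time correlation is positive, the pretest-posttest inflation factor is smaller than the posttest-only one, so posttest-only power calculations misstate the required sample size.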
The Euclid AOCS science mode design
NASA Astrophysics Data System (ADS)
Bacchetta, A.; Saponara, M.; Torasso, A.; Saavedra Criado, G.; Girouart, B.
2015-06-01
Euclid is a Medium-Class mission of the ESA Cosmic Vision 2015-2025 plan. Thales Alenia Space Italy has been selected as prime contractor for the Euclid design and implementation. The spacecraft will be launched in 2020 on a Soyuz launch vehicle from Kourou to a large-amplitude orbit around the Sun-Earth libration point L2. The objective of Euclid is to understand the origin of the Universe's accelerating expansion by mapping large-scale structure over a cosmic time covering the last 10 billion years. The mission requires the ability to survey a large fraction of the extragalactic sky (i.e. the portion of sky with latitude higher than 30 deg with respect to the galactic plane) over its lifetime, with very high system stability (telescope, focal plane, spacecraft pointing) to minimize systematic effects. The AOCS is a key element in meeting the scientific requirements. The AOCS design drivers are pointing performance and image quality (Relative Pointing Error over 700 s less than 25 mas, 68% confidence level), and minimization of slew time between observation fields to meet the goal of completing the Wide Extragalactic Survey in 6 years. The first driver demands a Fine Guidance Sensor in the telescope focal plane for accurate attitude measurement, as well as actuators with low noise and fine command resolution. The second driver requires high-torque actuators and an extended attitude control bandwidth. In the design, reaction wheels (RWL) and cold-gas micro-propulsion (MPS) are used in a synergetic and complementary way during different operational phases of the science mode. The RWL are used for performing the field slews, whereas during scientific observation they are stopped so as not to perturb the pointing with additional mechanical noise. The MPS is used for maintaining the reference attitude with high pointing accuracy during scientific observation.
This unconventional concept achieves the pointing performance with the shortest maneuver times, with significant mass savings with respect to the MPS-only solution.
Shadish, William R; Rindskopf, David M; Boyajian, Jonathan G
2016-08-01
We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs vs. as a usual randomized controlled trial (RCT). Two blinded statisticians independently analyzed results. One analyzed the RCT comparing mean outcomes of group A (high dose IgG) to group B (low dose IgG) at the usual trial end point (time 6 in this case). The other analyzed all 12 time points for the group B patients as six single-case experimental designs analyzed together in a Bayesian nonlinear framework. In the randomized trial, group A [M = 794.93; standard deviation (SD) = 90.48] had significantly higher serum IgG levels at time six than group B (M = 283.89; SD = 71.10) (t = 10.88; df = 10; P < 0.001), yielding a mean difference of MD = 511.05 [standard error (SE) = 46.98]. For the single-case experimental designs, the effect from an intrinsically nonlinear regression was also significant and comparable in size with overlapping confidence intervals: MD = 495.00, SE = 54.41, and t = 495.00/54.41 = 9.10. Subsequent exploratory analyses indicated that how trend was modeled made a difference to these conclusions. The results of single-case experimental designs accurately approximated results from an RCT, although more work is needed to understand the conditions under which this holds. Copyright © 2016 Elsevier Inc. All rights reserved.
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.
Koppelmans, Vincent; Erdeniz, Burak; De Dios, Yiri E; Wood, Scott J; Reuter-Lorenz, Patricia A; Kofman, Igor; Bloomberg, Jacob J; Mulavara, Ajitkumar P; Seidler, Rachael D
2013-12-18
Long-duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. This study includes three groups (astronauts, bed rest subjects, ground-based control subjects), for each of which the design is single-group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points before, three time points during, and four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed rest subjects will be assessed at two time points before, two time points during, and three time points after bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance.
Potential changes over time and associations between cognition, motor-behavior, and brain structure and function will be analyzed. This study explores how spaceflight induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight.
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
Time as a dimension of the sample design in national-scale forest inventories
Francis Roesch; Paul Van Deusen
2013-01-01
Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...
The "Best Worst" Field Optimization and Focusing
NASA Technical Reports Server (NTRS)
Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark
2008-01-01
A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
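The minimax criterion described above can be sketched directly: scan candidate focus positions and keep the one whose worst field point is best. The quadratic through-focus spot model and the field-curvature offsets below are illustrative assumptions standing in for real image-quality data.

```python
# "Best worst" focusing sketch: choose the focus position that
# minimizes the WORST field point's spot size, rather than a weighted
# average over field points.

def spot_size(focus, field_offset):
    """Toy through-focus curve: the spot grows quadratically away from
    a field point's own best-focus position (its curvature offset)."""
    return 1.0 + (focus - field_offset) ** 2

def best_worst_focus(focus_candidates, field_offsets):
    """Minimax selection over the candidate focus positions."""
    return min(focus_candidates,
               key=lambda f: max(spot_size(f, c) for c in field_offsets))

# Three field points whose individual best foci differ (field curvature):
offsets = [-0.2, 0.0, 0.4]
candidates = [i / 100 for i in range(-50, 51)]   # scan -0.5 .. 0.5
focus = best_worst_focus(candidates, offsets)
```

With this toy model the minimax focus lands midway between the two extreme field offsets, which matches the intuition in the abstract: the worst field point is improved until another field point becomes the worst.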
Low-loss reciprocal optical terminals for two-way time-frequency transfer.
Swann, W C; Sinclair, L C; Khader, I; Bergeron, H; Deschênes, J-D; Newbury, N R
2017-12-01
We present the design and performance of a low-cost, reciprocal, compact free-space terminal employing tip/tilt pointing compensation that enables optical two-way time-frequency transfer over free-space links across the turbulent atmosphere. The insertion loss of the terminals is ∼1.5 dB with total link losses of 15 dB, 24 dB, and 50 dB across horizontal, turbulent 2-km, 4-km, and 12-km links, respectively. The effects of turbulence on pointing control and aperture size, and their influence on the terminal design, are discussed.
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time-series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
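A greedy stand-in for the TPS selection idea can be sketched as follows: pick time points so that linear interpolation through them reconstructs densely sampled pilot profiles well. The gene curves, noise level, and budget of 8 points are illustrative assumptions; TPS itself solves the combinatorial problem in a more principled way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Densely sampled "pilot" expression profiles for a few genes (toy curves).
t_dense = np.linspace(0, 10, 41)
genes = np.stack([np.sin(0.5 * t_dense + p) + 0.05 * rng.standard_normal(41)
                  for p in (0.0, 1.0, 2.0)])

def reconstruction_error(picked):
    """Error when the dense profiles are rebuilt by linear interpolation
    through only the picked time points."""
    idx = sorted(picked)
    err = 0.0
    for g in genes:
        rebuilt = np.interp(t_dense, t_dense[idx], g[idx])
        err += np.mean((rebuilt - g) ** 2)
    return err

# Greedy selection: always keep the endpoints, then repeatedly add the
# candidate point that most reduces the reconstruction error.
picked = [0, len(t_dense) - 1]
while len(picked) < 8:
    candidates = [i for i in range(len(t_dense)) if i not in picked]
    best = min(candidates, key=lambda i: reconstruction_error(picked + [i]))
    picked.append(best)

print("selected times:", t_dense[sorted(picked)])
```

As in the paper, the selected points are then reused to sample the more expensive molecular assays, on the assumption that points that reconstruct expression well also represent the other data types.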
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1984-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two spool, two stream (turbofan) engines. DIGTEM was developed to support the development of a real time multiprocessor based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user written subroutine (TMRSP). Closed loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, make it a very useful tool for developing a model of a specific turbofan engine.
On the improvement of blood sample collection at clinical laboratories
2014-01-01
Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
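The genetic-algorithm approach can be sketched in miniature. The coordinates, demands, capacity, and GA settings below are invented, and the real system also enforces two-hour time windows, which are omitted here for brevity:

```python
import random

random.seed(1)

# Hypothetical collection points: (x, y, demand in thermal containers).
points = [(2, 3, 1), (5, 1, 2), (6, 6, 1), (1, 7, 2), (8, 3, 1), (4, 8, 2)]
DEPOT, CAP = (0, 0), 4   # core laboratory location and vehicle capacity

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_cost(perm):
    """Split the visiting order into routes whenever capacity would be
    exceeded, returning total depot-to-depot travel distance."""
    cost, load, prev = 0.0, 0, DEPOT
    for i in perm:
        x, y, d = points[i]
        if load + d > CAP:               # return to the lab, start a new route
            cost += dist(prev, DEPOT)
            load, prev = 0, DEPOT
        cost += dist(prev, (x, y))
        load, prev = load + d, (x, y)
    return cost + dist(prev, DEPOT)

def crossover(p1, p2):                   # order crossover (OX)
    a, b = sorted(random.sample(range(len(p1)), 2))
    mid = p1[a:b]
    rest = [g for g in p2 if g not in mid]
    return rest[:a] + mid + rest[a:]

pop = [random.sample(range(len(points)), len(points)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=route_cost)             # elitist: the best is never replaced
    child = crossover(*random.sample(pop[:10], 2))
    if random.random() < 0.3:            # swap mutation
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    pop[-1] = child

best = min(pop, key=route_cost)
print("visiting order:", best, "cost:", round(route_cost(best), 2))
```

With only six points this instance is trivially solvable by enumeration; the GA machinery pays off at the 43- and 74-point scale reported in the paper.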
Research of real-time communication software
NASA Astrophysics Data System (ADS)
Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong
2003-11-01
Real-time communication plays an increasingly important role in our work, life, and ocean monitoring. With the rapid progress of computer and communication techniques, and the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper describes research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted that can transmit and receive video, audio, and engineering data over the satellite channel. Several software modules were developed that realize point-to-point satellite intercommunication in the ocean monitoring system. The software offers three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, configurable optional parameters greatly increase the flexibility of system operation. Third, it replaces some hardware, which not only reduces system cost and promotes the miniaturization of the communication system but also increases the system's agility.
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. The first features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the experiment design literature as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to the calibration of a balance using an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
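The core finding, that a smaller spread-out design can match or beat a large OFAT matrix, can be illustrated with a toy calibration model. Everything below (the response coefficients, noise level, load ranges) is made up; the sketch highlights one known weakness of pure OFAT points: where one load is always zero, they carry no information about the interaction term at all.

```python
import numpy as np

rng = np.random.default_rng(42)

# True (made-up) calibration: linear terms plus one load interaction.
beta = np.array([0.0, 1.0, 0.5, 0.02])     # intercept, L1, L2, L1*L2

def model_matrix(L1, L2):
    return np.column_stack([np.ones_like(L1), L1, L2, L1 * L2])

def respond(L1, L2):
    return model_matrix(L1, L2) @ beta + 0.01 * rng.standard_normal(L1.size)

def fit(L1, L2):
    X = model_matrix(L1, L2)
    return np.linalg.lstsq(X, respond(L1, L2), rcond=None)[0]

# OFAT-style: vary one load at a time on a dense grid (700 points total).
g = np.linspace(-1, 1, 350)
ofat = fit(np.r_[g, np.zeros_like(g)], np.r_[np.zeros_like(g), g])

# Designed alternative: 100 points spread over the whole load space at once.
mdoe = fit(rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100))

print("OFAT interaction estimate:", ofat[3])   # unidentifiable: comes out 0
print("100-pt design interaction estimate:", mdoe[3])
```

Because the interaction column is identically zero at OFAT points, the least-squares fit returns a zero coefficient regardless of sample size, while the 100-point spread design recovers it.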
Deciphering assumptions about stepped wedge designs: the case of Ebola vaccine research.
Doussau, Adélaïde; Grady, Christine
2016-12-01
Ethical concerns about randomising persons to a no-treatment arm in the context of Ebola epidemic led to consideration of alternative designs. The stepped wedge (SW) design, in which participants or clusters are randomised to receive an intervention at different time points, gained popularity. Common arguments in favour of using this design are (1) when an intervention is likely to do more good than harm, (2) all participants should receive the experimental intervention at some time point during the study and (3) the design might be preferable for practical reasons. We examine these assumptions when considering Ebola vaccine research. First, based on the claim that a stepped wedge design is indicated when it is likely that the intervention will do more good than harm, we reviewed published and ongoing SW trials to explore previous use of this design to test experimental drugs or vaccines, and found that SW design has never been used for trials of experimental drugs or vaccines. Given that Ebola vaccines were all experimental with no prior efficacy data, the use of a stepped wedge design would have been unprecedented. Second, we show that it is rarely true that all participants receive the intervention in SW studies, but rather, depending on certain design features, all clusters receive the intervention. Third, we explore whether the SW design is appealing for feasibility reasons and point out that there is significant complexity. In the setting of the Ebola epidemic, spatiotemporal variation may have posed problematic challenges to a stepped wedge design for vaccine research. Finally, we propose a set of points to consider for scientific reviewers and ethics committees regarding proposals for SW designs. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Trends in modern system theory
NASA Technical Reports Server (NTRS)
Athans, M.
1976-01-01
The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.
Modeling an enhanced ridesharing system with meet points and time windows
Li, Xin; Hu, Sangen; Deng, Kai
2018-01-01
With the rise of e-hailing services in urban areas, ridesharing is becoming a common mode of transportation. This paper presents a mathematical model to design an enhanced ridesharing system with meet points and users' preferred time windows. The introduction of meet points allows ridesharing operators to trade off the benefit of saving en-route delays against the cost of additional walking for some passengers to be collectively picked up or dropped off. This extension to the traditional door-to-door ridesharing problem brings more operational flexibility in urban areas (where potential requests may be densely distributed in a neighborhood), and thus can achieve better system performance in terms of reducing total travel time and increasing the number of served passengers. We design and implement a Tabu-based meta-heuristic algorithm to solve the proposed mixed integer linear program (MILP). To evaluate the validity and effectiveness of the proposed model and solution algorithm, several scenarios are designed and also solved to optimality by CPLEX. Results demonstrate that (i) a detailed route plan with passenger assignment to meet points can be obtained with en-route delay savings; (ii) compared to CPLEX, the meta-heuristic algorithm offers higher computational efficiency and produces good-quality solutions within 8%-15% of the global optima; and (iii) introducing meet points to a ridesharing system saves total travel time by 2.7%-3.8% for small-scale ridesharing systems. More benefits are expected for ridesharing systems with larger fleets. This study provides a new tool to efficiently operate ridesharing systems, particularly when ridesharing vehicles are in short supply during peak hours. Traffic congestion mitigation is also expected. PMID:29715302
NASA Technical Reports Server (NTRS)
Fichtl, G. H.
1971-01-01
Statistical estimates of wind shear in the planetary boundary layer are important in the design of V/STOL aircraft, and for the design of the Space Shuttle. The data analyzed in this study consist of eleven sets of longitudinal turbulent velocity fluctuation time histories digitized at 0.2 sec intervals with approximately 18,000 data points per time history. The longitudinal velocity fluctuations were calculated with horizontal wind and direction data collected at the 18-, 30-, 60-, 90-, 120-, and 150-m levels. The data obtained confirm the result that Eulerian time spectra transformed to wave-number spectra with Taylor's frozen eddy hypothesis possess inertial-like behavior at wave-numbers well out of the inertial subrange.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm combining importance sampling, a class of MCS, with RSM is proposed. The proposed algorithm starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This stage finishes after a small number of samples have been generated. RSM then takes over using Bucher's experimental design, with the last design point and a proposed effective length serving as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples show the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules.
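The importance-sampling half of such an algorithm can be sketched on a toy linear limit state whose design point is known in closed form (the RSM/Bucher stage is omitted; the dimension, sample size, and limit-state function are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Limit state g(x) <= 0 denotes failure; inputs are standard normal.
def g(x):
    return 3.0 - x.sum(axis=1) / np.sqrt(x.shape[1])

dim, n = 2, 20_000
# For this linear limit state the design point (most probable failure
# point) lies at reliability index beta = 3 along the direction (1,1)/sqrt(2).
design_point = 3.0 * np.ones(dim) / np.sqrt(dim)

# Importance sampling: draw around the design point and reweight each
# sample by the ratio of the true density to the sampling density
# (normalizing constants cancel, so only the exponents remain).
x = design_point + rng.standard_normal((n, dim))
log_w = (-0.5 * (x ** 2).sum(axis=1)
         + 0.5 * ((x - design_point) ** 2).sum(axis=1))
pf = np.mean((g(x) <= 0) * np.exp(log_w))

print(f"IS estimate of failure probability: {pf:.2e} (exact = Phi(-3) = 1.35e-03)")
```

Crude MCS would need millions of samples to resolve a probability of this size; centering the sampler at the design point is what makes the small-sample first stage of the proposed algorithm viable.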
Design of rocker switches for work-vehicles--an application of Kansei Engineering.
Schütte, Simon; Eklund, Jörgen
2005-09-01
Rocker switches used in vehicles meet high demands partly due to the increased focus on customer satisfaction. Previous studies focused on ergonomics and usability rather than design for emotions and affection. The aim of this study was to determine how and to what extent engineering properties influence the perception of rocker switches. Secondary aims were to compare two types of rating scales and to determine consistency over time of the ratings. As a method Kansei Engineering was used, describing a product domain from a physical and semantic point of view. A model was built and validated, and recommendations for new designs were given. It was seen that the subjective impressions of robustness, precision and design are strongly influenced by the zero position, the contact position, the form-ratio, shape and the surface of rocker switches. A 7-point scale was found suitable. The Kansei ratings were consistent over time.
Design of an advanced flight planning system
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1985-01-01
The demands for both fuel conservation and four-dimensional traffic management require that the preflight planning process be designed to account for advances in airborne flight management and weather forecasting. The steps and issues in designing such an advanced flight planning system are presented. Focus is placed on the different optimization options for generating the three-dimensional reference path. For the cruise phase, one can use predefined jet routes, direct routes based on a network of evenly spaced grid points, or a network where the grid points are existing navaid locations. Each choice poses its own problems in determining an optimum solution. Finding the reference path is further complicated by the choice of cruise altitude levels, use of a time-varying weather field, and requiring a fixed time of arrival (the four-dimensional problem).
NASA Technical Reports Server (NTRS)
Hopkins, Randall C.; Capizzo, Peter; Fincher, Sharon; Hornsby, Linda S.; Jones, David
2010-01-01
The Advanced Concepts Office at Marshall Space Flight Center completed a brief spacecraft design study for the 8-meter monolithic Advanced Technology Large Aperture Space Telescope (ATLAST-8m). This spacecraft concept provides all power, communication, telemetry, avionics, guidance and control, and thermal control for the observatory, and inserts the observatory into a halo orbit about the second Sun-Earth Lagrange point. The multidisciplinary design team created a simple spacecraft design that enables component and science instrument servicing, employs articulating solar panels for help with momentum management, and provides precise pointing control while at the same time fast slewing for the observatory.
NASA Technical Reports Server (NTRS)
1972-01-01
The conceptual designs of four useful tilt-rotor aircraft for the 1975 to 1980 time period are presented. Parametric studies leading to design point selection are described, and the characteristics and capabilities of each configuration are presented. An assessment is made of current technology status, and additional tilt-rotor research programs are recommended to minimize the time, cost, and risk of development of these vehicles.
Portable Low-Volume Therapy for Severe Blood Loss
2013-06-01
...with Tukey's post hoc test were performed to find treatment differences within time points for total hemoglobin (tHb), pH, pressure of... No correlation was observed for any of the parameters at any...
Ballari, Rajashekhar V; Martin, Asha; Gowda, Lalitha R
2013-01-01
Brinjal is an important vegetable crop. Major crop loss of brinjal is due to insect attack. Insect-resistant EE-1 brinjal has been developed and is awaiting approval for commercial release. Consumer health concerns and implementation of international labelling legislation demand reliable analytical detection methods for genetically modified (GM) varieties. End-point and real-time polymerase chain reaction (PCR) methods were used to detect EE-1 brinjal. In end-point PCR, primer pairs specific to 35S CaMV promoter, NOS terminator and nptII gene common to other GM crops were used. Based on the revealed 3' transgene integration sequence, primers specific for the event EE-1 brinjal were designed. These primers were used for end-point single, multiplex and SYBR-based real-time PCR. End-point single PCR showed that the designed primers were highly specific to event EE-1 with a sensitivity of 20 pg of genomic DNA, corresponding to 20 copies of haploid EE-1 brinjal genomic DNA. The limits of detection and quantification for SYBR-based real-time PCR assay were 10 and 100 copies respectively. The prior development of detection methods for this important vegetable crop will facilitate compliance with any forthcoming labelling regulations. Copyright © 2012 Society of Chemical Industry.
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method that selects the space-time point that minimizes a function of the variance, in each step, is used. We demonstrate the methodology applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information of 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that from the existing monitoring program that consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
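The sequential variance-minimizing selection can be sketched with Gaussian conditioning, which is what the static Kalman filter update reduces to in this setting. The candidate points, covariance model, and budget of 10 picks below are illustrative assumptions, not the Valle de Querétaro data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Space-time covariance over candidate (x, y, t) monitoring points, built
# from an exponential covariance model (range parameter is illustrative).
pts = rng.uniform(0, 10, size=(40, 3))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = np.exp(-d / 5.0) + 1e-6 * np.eye(len(pts))

def posterior_var(C, chosen):
    """Total variance remaining after conditioning on the chosen points."""
    if not chosen:
        return np.trace(C)
    S = C[np.ix_(chosen, chosen)]
    B = C[:, chosen]
    return np.trace(C - B @ np.linalg.solve(S, B.T))

# Sequential optimization: in each step, pick the space-time point that
# minimizes the remaining total variance, as in the proposed methodology.
chosen = []
for _ in range(10):
    best = min((i for i in range(len(pts)) if i not in chosen),
               key=lambda i: posterior_var(C, chosen + [i]))
    chosen.append(best)

print("selected points:", chosen)
print("fraction of variance kept:", posterior_var(C, chosen) / np.trace(C))
```

Points whose information is already implied by earlier picks reduce the variance very little, which is how the method flags the redundant 240 of the 418 existing monitoring points.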
Active Learning: A PowerPoint Tutorial
ERIC Educational Resources Information Center
Gareis, Elisabeth
2007-01-01
Individual or group presentations are common assignments in business communication courses, and many students use PowerPoint slides as audiovisual support. Frequently, curriculum constraints don't allow instructors much time to teach effective design and delivery of presentation graphics in their courses; guidelines in the form of minilectures or…
3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are captured with a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR application for the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.
NASA Technical Reports Server (NTRS)
Chen, C. C.; Franklin, C. F.
1980-01-01
The frequency reuse capability is demonstrated for a Ku-band multiple beam antenna which provides contiguous low sidelobe spot beams for point-to-point communications between any two points within the continental United States (CONUS), or regional coverage beams for direct broadcast systems. A spot beam antenna in the 14/21 GHz band which provides contiguous overlapping beams covering CONUS and two discrete beams covering Hawaii and Alaska were designed, developed, and tested. Two reflector antennas are required for providing contiguous coverage of CONUS. Each is comprised of one offset parabolic reflector, one flat polarization diplexer, and two separate planar array feeds. This antenna system provides contiguous spot beam coverage of CONUS, utilizing 15 beams. Also designed, developed and demonstrated was a shaped contoured beam antenna system which provides contiguous four time zone coverage of CONUS from a single offset parabolic reflector incorporating one flat polarization diplexer and two separate planar array feeds. The beams which illuminate the eastern time zone and the mountain time zone are horizontally polarized, while the beams which illuminate the central time zone and the pacific time zone are vertically polarized. Frequency reuse is achieved by amplitude and polarization isolation.
The Galileo scan platform pointing control system - A modern control theoretic viewpoint
NASA Technical Reports Server (NTRS)
Sevaston, G. E.; Macala, G. A.; Man, G. K.
1985-01-01
The current Galileo scan platform pointing control system (SPPCS) is described, and ways in which modern control concepts could serve to enhance it are considered. Of particular interest are: the multi-variable design model and overall control system architecture, command input filtering, feedback compensator and command input design, stability robustness constraint for both continuous time control systems and for sampled data control systems, and digital implementation of the control system. The proposed approach leads to the design of a system that is similar to current Galileo SPPCS configuration, but promises to be more systematic.
2017-03-26
logistic constraints and associated travel time between points in the central and western Great Basin. The geographic and temporal breadth of our... surveys (MacKenzie and Royle 2005). In most cases, less time is spent traveling between sites on a given day when the single-day design is implemented... with the single-day design (110 hr). These estimates did not include return-travel time, which did not limit sampling effort. As a result, we could
Stieger, Stefan; Gumhalter, Nora; Tran, Ulrich S.; Voracek, Martin; Swami, Viren
2013-01-01
The present study utilized a repeated cross-sectional survey design to examine belief in conspiracy theories about the abduction of Natascha Kampusch. At two time points (October 2009 and October 2011), participants drawn from independent cross-sections of the Austrian population (Time Point 1, N = 281; Time Point 2, N = 277) completed a novel measure of belief in conspiracy theories concerning the abduction of Kampusch, as well as measures of general conspiracist ideation, self-esteem, paranormal and superstitious beliefs, cognitive ability, and media exposure to the Kampusch case. Results indicated that although belief in the Kampusch conspiracy theory declined between testing periods, the effect size of the difference was small. In addition, belief in the Kampusch conspiracy theory was significantly predicted by general conspiracist ideation at both time points. The need to conduct further longitudinal tests of conspiracist ideation is emphasized in conclusion. PMID:23745118
Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei
2010-07-01
In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole course of blood sampling, designed different sampling-time schemes, and evaluated the quality of the parameter estimates and the efficiency of prediction. Apart from CL1/F, the sensitivity indices of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model remained relatively high and changed rapidly over time. As the variance of k(a) increased, its sensitivity index increased markedly, accompanied by a significant decrease in the sensitivity indices of the other parameters and an obvious change in peak time. Simulation with NONMEM and comparison among the fitting results showed that sampling time points designed according to FAST outperformed the alternatives. This suggests that FAST can assess the sensitivity of model parameters effectively and assist in the design of clinical sampling times and the construction of PopPK models.
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
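The stump-regression idea in this abstract can be illustrated with a toy implementation (an assumed form for illustration only; the paper's actual estimator additionally uses bootstrap confidence intervals and finite sample bias correction):

```python
import numpy as np

def stump_changepoint(times, pvals):
    """Fit a one-split 'stump' regression to p-values computed on small
    time intervals, returning the split time that minimizes the total
    within-segment squared error (a simplified change-point estimate)."""
    best_tau, best_sse = None, np.inf
    for k in range(1, len(times) - 1):
        left, right = pvals[:k], pvals[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_tau, best_sse = times[k], sse
    return best_tau
```

With p-values that are uniformly low before some time and high after it, the estimator recovers the boundary between the two regimes.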
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
The modeling and design of the Annular Suspension and Pointing System /ASPS/. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Lin, W. C. W.
1979-01-01
The Annular Suspension and Pointing System (ASPS) is a payload auxiliary pointing device of the Space Shuttle. The ASPS is comprised of two major subassemblies, a vernier and a coarse pointing subsystem. The three functions provided by the ASPS are related to the pointing of the payload, centering the payload in the magnetic actuator assembly, and tracking the payload mounting plate and shuttle motions by the coarse gimbals. The equations of motion of a simplified planar model of the ASPS are derived. Attention is given to a state diagram of the dynamics of the ASPS with position-plus-rate controller, the nonlinear spring characteristic for the wire-cable torque of the ASPS, the design of the analog ASPS through decoupling and pole placement, and the time response of different components of the continuous control system.
Using Newsletters to Improve Parents' Communication with Their Early Adolescents
ERIC Educational Resources Information Center
Dworkin, Jodi; Gonzalez, Chris; Gengler, Colleen; Olson, Kathleen
2011-01-01
Two sets of newsletters designed to improve parent-teen communication were distributed at two different time points to 71 parents of seventh and eighth graders across five states. At both points, parents completed an evaluation assessing parent-child communication, parenting practices, the emotional experience of parenting, other parent education…
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1971-01-01
The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.
Design and Outcomes of a "Mothers In Motion" Behavioral Intervention Pilot Study
ERIC Educational Resources Information Center
Chang, Mei-Wei; Nitzke, Susan; Brown, Roger
2010-01-01
Objective: This paper describes the design and findings of a pilot "Mothers In Motion" (P-"MIM") program. Design: A randomized controlled trial that collected data via telephone interviews and finger stick at 3 time points: baseline and 2 and 8 months post-intervention. Setting: Three Special Supplemental Nutrition Program for…
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers is developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
Design of barrier bucket kicker control system
NASA Astrophysics Data System (ADS)
Ni, Fa-Fu; Wang, Yan-Yu; Yin, Jun; Zhou, De-Tai; Shen, Guo-Dong; Zheng, Yang-De; Zhang, Jian-Chuan; Yin, Jia; Bai, Xiao; Ma, Xiao-Li
2018-05-01
The Heavy-Ion Research Facility in Lanzhou (HIRFL) contains two synchrotrons: the main cooler storage ring (CSRm) and the experimental cooler storage ring (CSRe). Beams are extracted from CSRm and injected into CSRe. To apply the Barrier Bucket (BB) method to CSRe beam accumulation, a new BB-technology-based kicker control system was designed and implemented. The controller of the system is implemented using an Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) chip and a field-programmable gate array (FPGA) chip. Within this architecture, the ARM is responsible for data presetting and floating-point arithmetic processing. The FPGA computes the RF phase point of the two rings and offers more accurate control of the time delay. An online preliminary experiment on HIRFL was also designed to verify the functionalities of the control system. The result shows that the reference trigger point of two different sinusoidal RF signals for an arbitrary phase point was acquired with a matched phase error below 1° (approximately 2.1 ns), and a step delay time better than 2 ns was realized.
Gas-Dynamic Designing of the Exhaust System for the Air Brake
NASA Astrophysics Data System (ADS)
Novikova, Yu; Goriachkin, E.; Volkov, A.
2018-01-01
Each gas turbine engine is tested several times during its life cycle. The test equipment includes an air brake that absorbs the power produced by the gas turbine engine. In actual conditions, the outlet pressure of the air brake does not change and is equal to atmospheric pressure. For this reason, a special exhaust system must be designed for the air brake. The mission of the exhaust system is to provide the required level of backpressure at the outlet of the air brake; this backpressure is needed so that the air brake absorbs the required power (i.e., operates at the required points on its performance curves). This paper describes the development of the gas-dynamic channel, the design of the outlet guide vane, and the creation of a unified exhaust system for the air brake. Using a unified exhaust system involves moving the operating point along the performance curve further away from the calculated point. However, applying one exhaust system instead of two will significantly reduce cost and time.
Homemade Powerpoint Games: Game Design Pedagogy Aligned to the TPACK Framework
ERIC Educational Resources Information Center
Siko, Jason P.; Barbour, Michael K.
2012-01-01
While researchers are examining the role of playing games to learn, others are looking at using game design as an instructional tool. However, game-design software may require additional time to train both teachers and students. In this article, the authors discuss the use of Microsoft PowerPoint as a tool for game-design instruction and the…
2007-06-01
file ARC-20060811T130816.txt, where color is used to represent points in time (red being the earliest, transitioning to orange, yellow, then white...Administration (NOAA), using passive hydrophone arrays along the mid-Atlantic ridge to listen for underwater earthquakes and volcanoes, have found that a...appeared. The earliest data points were designated red, and later points were shades of orange and yellow, until the last points (relative to the
Accessing the exceptional points of parity-time symmetric acoustics
Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang
2016-01-01
Parity-time (PT) symmetric systems experience a phase transition between the PT exact and broken phases at an exceptional point. These PT phase transitions contribute significantly to the design of single-mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443
A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology
NASA Technical Reports Server (NTRS)
Hoy, Scott D.; Figueiredo, Marco A.
2006-01-01
Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools as proposed by the BEE VHDL 2004, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL Fixed Point Reconfigurable Radio Component library does not require the use of the FPGA vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in Mathworks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.
Combined VSWIR/TIR Products Overview: Issues and Examples
NASA Technical Reports Server (NTRS)
Knox, Robert G.
2010-01-01
The presentation provides a summary of VSWIR data collected at 19-day intervals for most areas. TIR data was collected both day and night on a 5-day cycle (more frequently at higher latitudes), the TIR swath is four times as wide as VSWIR, and the 5-day orbit repeat is approximate. Topics include nested swath geometry for reference point design and coverage simulations for sample FLUXNET tower sites. Other points examined include variation in latitude for revisit frequency, overpass times, and TIR overlap geometry and timing between VSWIR data collections.
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
NASA Astrophysics Data System (ADS)
Jakoby, Bjoern W.; Bercier, Yanic; Watson, Charles C.; Bendriem, Bernard; Townsend, David W.
2009-06-01
A new combined lutetium oxyorthosilicate (LSO) PET/CT scanner with an extended axial field-of-view (FOV) of 21.8 cm has been developed (Biograph TruePoint PET/CT with TrueV; Siemens Molecular Imaging) and introduced into clinical practice. The scanner includes the recently announced point spread function (PSF) reconstruction algorithm. The PET components incorporate four rings of 48 detector blocks, 5.4 cm × 5.4 cm in cross-section. Each block comprises a 13 × 13 matrix of 4 × 4 × 20 mm³ elements. Data are acquired with a 4.5 ns coincidence time window and an energy window of 425-650 keV. The physical performance of the new scanner has been evaluated according to the recently revised National Electrical Manufacturers Association (NEMA) NU 2-2007 standard and the results have been compared with a previous PET/CT design that incorporates three rings of block detectors with an axial coverage of 16.2 cm (Biograph TruePoint PET/CT; Siemens Molecular Imaging). In addition to the phantom measurements, patient Noise Equivalent Count Rates (NECRs) have been estimated for a range of patients with different body weights (42-154 kg). The average spatial resolution is the same for both scanners: 4.4 mm (FWHM) and 5.0 mm (FWHM) at 1 cm and 10 cm respectively from the center of the transverse FOV. The scatter fractions of the Biograph TruePoint and Biograph TruePoint TrueV are comparable at 32%. Compared to the three-ring design, the system sensitivity and peak NECR with smoothed randoms correction (1R) increase by 82% and 73%, respectively. The increase in sensitivity from the extended axial coverage of the Biograph TruePoint PET/CT with TrueV should allow a decrease in either scan time or injected dose without compromising diagnostic image quality. The contrast improvement with the PSF reconstruction potentially offers enhanced detectability for small lesions.
Information content of household-stratified epidemics.
Kinyanjui, T M; Pellis, L; House, T
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
29 CFR 1926.1000 - Rollover protective structures (ROPS) for material handling equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... two times the weight of the prime mover applied at the point of impact. (i) The design objective shall..., if any; (3) Machine make, model, or series number that the structure is designed to fit. (f) Machines... performance criteria detailed in §§ 1926.1001 and 1926.1002, as applicable or shall be designed, fabricated...
29 CFR 1926.1000 - Rollover protective structures (ROPS) for material handling equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... two times the weight of the prime mover applied at the point of impact. (i) The design objective shall..., if any; (3) Machine make, model, or series number that the structure is designed to fit. (f) Machines... performance criteria detailed in §§ 1926.1001 and 1926.1002, as applicable or shall be designed, fabricated...
29 CFR 1926.1000 - Rollover protective structures (ROPS) for material handling equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... two times the weight of the prime mover applied at the point of impact. (i) The design objective shall..., if any; (3) Machine make, model, or series number that the structure is designed to fit. (f) Machines... performance criteria detailed in §§ 1926.1001 and 1926.1002, as applicable or shall be designed, fabricated...
29 CFR 1926.1000 - Rollover protective structures (ROPS) for material handling equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... two times the weight of the prime mover applied at the point of impact. (i) The design objective shall..., if any; (3) Machine make, model, or series number that the structure is designed to fit. (f) Machines... performance criteria detailed in §§ 1926.1001 and 1926.1002, as applicable or shall be designed, fabricated...
29 CFR 1926.1000 - Rollover protective structures (ROPS) for material handling equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... two times the weight of the prime mover applied at the point of impact. (i) The design objective shall..., if any; (3) Machine make, model, or series number that the structure is designed to fit. (f) Machines... performance criteria detailed in §§ 1926.1001 and 1926.1002, as applicable or shall be designed, fabricated...
System on a Chip Real-Time Emulation (SOCRE)
2006-09-01
...emulation platform included LDPC decoders, A/V and radio applications. Port BEE flow to Emulation Platforms, SOC Technologies. One of the key tasks of the...Once the design has been described within Simulink, the designer runs the BEE design flow within Matlab using the bee_xps interface. At this point
Comparison of Design-Build to Design-Bid-Build as a Project Delivery Method
2001-12-01
Time growth is generally coupled with cost growth, and this rule holds true when looking at the cost growth on the DB and DBB...and Tierno, M., "Points to Remember," The Military Engineer, No. 609, pp. 25-26, January-February 2001. 4. Crammer, Mark, "Design-Build in the
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
1987-09-01
real-time operating system should be efficient from the real-time point...5,8]) system naming scheme. 3.2 Protecting Objects Real-time embedded systems usually neglect protection mechanisms. However, a real-time operating system cannot...allocation mechanism should adhere to application constraints. This strong relationship between a real-time operating system and the application
Quantitative structure-activity relationship models that stand the test of time.
Davis, Andrew M; Wood, David J
2013-04-01
The pharmaceutical industry is in a period of intense change. While this has many drivers, attrition through the development process continues to be an important pressure. The emerging definitions of "compound quality" that are based on retrospective analyses of developmental attrition have highlighted a new direction for medicinal chemistry and the paradigm of "quality at the point of design". The time has come for retrospective analyses to catalyze prospective action. Quality at the point of design places pressure on the quality of our predictive models. Empirical QSAR models when built with care provide true predictive control, but their accuracy and precision can be improved. Here we describe AstraZeneca's experience of automation in QSAR model building and validation, and how an informatics system can provide a step-change in predictive power to project design teams, if they choose to use it.
2013-01-01
Background Designs and analyses of clinical trials with a time-to-event outcome almost invariably rely on the hazard ratio to estimate the treatment effect and implicitly, therefore, on the proportional hazards assumption. However, the results of some recent trials indicate that there is no guarantee that the assumption will hold. Here, we describe the use of the restricted mean survival time as a possible alternative tool in the design and analysis of these trials. Methods The restricted mean is a measure of average survival from time 0 to a specified time point, and may be estimated as the area under the survival curve up to that point. We consider the design of such trials according to a wide range of possible survival distributions in the control and research arm(s). The distributions are conveniently defined as piecewise exponential distributions and can be specified through piecewise constant hazards and time-fixed or time-dependent hazard ratios. Such designs can embody proportional or non-proportional hazards of the treatment effect. Results We demonstrate the use of restricted mean survival time and a test of the difference in restricted means as an alternative measure of treatment effect. We support the approach through the results of simulation studies and real examples from several cancer trials. We illustrate the required sample size under proportional and non-proportional hazards, as well as the significance level and power of the proposed test. Values are compared with those from the standard approach which utilizes the logrank test. Conclusions We conclude that the hazard ratio cannot be recommended as a general measure of the treatment effect in a randomized controlled trial, nor is it always appropriate when designing a trial. Restricted mean survival time may provide a practical way forward and deserves greater attention. PMID:24314264
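To make the "area under the survival curve up to a point" definition concrete, here is a minimal Python sketch of the restricted mean survival time computed from a Kaplan-Meier fit (illustrative only; function and variable names are ours, not from the trial software):

```python
import numpy as np

def km_rmst(times, events, tau):
    """Restricted mean survival time: the area under the Kaplan-Meier
    survival step curve from time 0 up to the truncation time tau."""
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events, int)[order]
    s = 1.0          # current Kaplan-Meier survival estimate
    rmst = 0.0       # accumulated area under the step curve
    prev_t = 0.0
    at_risk = len(times)
    for t, d in zip(times, events):
        if t > tau:
            break
        rmst += s * (t - prev_t)   # flat segment ending at this time
        prev_t = t
        if d:                      # observed event: curve steps down
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1               # event or censoring leaves the risk set
    rmst += s * (tau - prev_t)     # last partial segment out to tau
    return rmst
```

With all-event data `times=[1, 2, 3, 4]` and `tau=4` this returns 2.5, i.e. the mean of `min(T, tau)`.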
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
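A software analogue of the processor's computation, covariance of Poisson-sampled data with all sample pairs binned into lag slots and a pair-count histogram used for normalization, might look like the following sketch (illustrative, not the instrument's firmware):

```python
import numpy as np

def slotted_autocovariance(t, u, max_lag, n_slots):
    """Autocovariance of an irregularly (Poisson) sampled signal u(t_i),
    estimated by slotting sample-pair products into lag bins and
    normalizing each bin by its pair count (a histogram of interarrival
    lags), in the spirit of the hardware processor described above."""
    order = np.argsort(t)                  # pair loop assumes sorted times
    t = np.asarray(t, float)[order]
    u = np.asarray(u, float)[order]
    u = u - u.mean()                       # work with fluctuations
    edges = np.linspace(0.0, max_lag, n_slots + 1)
    acc = np.zeros(n_slots)                # accumulated products per slot
    cnt = np.zeros(n_slots, dtype=int)     # pair-count histogram per slot
    n = len(t)
    for i in range(n):
        for j in range(i, n):
            lag = t[j] - t[i]
            if lag >= max_lag:             # sorted, so later pairs only grow
                break
            k = int(lag / max_lag * n_slots)
            acc[k] += u[i] * u[j]
            cnt[k] += 1
    cov = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    return edges[:-1], cov, cnt
```

The zero-lag slot contains the self-pairs, so `cov[0]` approximates the signal variance; the returned `cnt` plays the role of the 512-point interarrival histogram used for normalization.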
Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System
NASA Astrophysics Data System (ADS)
Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai
2017-12-01
The parachute system is widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences on the initiation point to ensure successful dynamic detonation. In practice, the initiating position scatters over a region within the fuel because of errors in the influencing factors. In this paper, the major factors influencing the initiation point were explored through airdrop tests, and the relationship between the initiation-point area and these factors was obtained. Based on this relationship, a volume equation of the initiation-point area was established to predict the range of the initiation point in the fuel. The analysis showed that the initiation point scatters over an area on account of errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor along the horizontal axis, whereas the secondary initiation charge velocity and the delay time were the major influencing factors along the vertical axis. Overall, the geometry of the initiation-point area is a sector determined by the errors in the attitude angle, secondary initiation charge velocity, and delay time.
Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change
Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem
2003-01-01
Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase. PMID:14728379
Design of sewage treatment system by applying fuzzy adaptive PID controller
NASA Astrophysics Data System (ADS)
Jin, Liang-Ping; Li, Hong-Chan
2013-03-01
In a sewage treatment system, control of the dissolved oxygen concentration is difficult because the process is nonlinear, time-varying, subject to large time delays, and uncertain, so an exact mathematical model is hard to establish. A conventional PID controller works well only near its linear operating point, and it is difficult to control the system when the operating point moves far away. To solve these problems, this paper combines fuzzy control with PID methods and designs a fuzzy adaptive PID controller based on a Siemens S7-300 PLC. It employs fuzzy inference to tune the PID parameters online. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
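The combination described, a PID loop whose gains are retuned online by fuzzy rules on the control error, can be sketched in a few lines (a Python stand-in for the S7-300 implementation; the rule table, thresholds, and plant below are invented for illustration):

```python
class FuzzyAdaptivePID:
    """Minimal sketch of a PID controller whose gains are rescaled
    online by coarse fuzzy-style rules on the error magnitude."""

    def __init__(self, kp, ki, kd, dt):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    @staticmethod
    def _gain_scale(err):
        # Crude singleton rules: large error boosts P and cuts I;
        # small error boosts I (and D) for steady-state accuracy.
        e = abs(err)
        if e > 1.0:
            return 1.5, 0.5, 1.0
        elif e > 0.2:
            return 1.0, 1.0, 1.0
        return 0.8, 1.5, 1.2

    def step(self, setpoint, measured):
        err = setpoint - measured
        derr = (err - self.prev_err) / self.dt
        sp, si, sd = self._gain_scale(err)
        self.integral += err * self.dt
        out = (sp * self.kp0 * err
               + si * self.ki0 * self.integral
               + sd * self.kd0 * derr)
        self.prev_err = err
        return out
```

Driving a first-order lag plant with this controller converges to the setpoint with zero steady-state error thanks to the integral term, while the rule table shifts the gain balance as the error shrinks.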
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
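As a rough illustration of D-optimal time-point reduction, assume a bi-exponential decay model (a stand-in for the FLIM-FRET model; parameter values are invented) and pick time points greedily to maximize det(JᵀJ) of the parameter sensitivity matrix:

```python
import numpy as np

def sensitivities(t, a=0.6, k1=2.0, k2=0.5):
    """Jacobian of the bi-exponential decay a*exp(-k1*t) + (1-a)*exp(-k2*t)
    with respect to (a, k1, k2), evaluated at nominal parameter values."""
    e1, e2 = np.exp(-k1 * t), np.exp(-k2 * t)
    return np.column_stack([e1 - e2, -a * t * e1, -(1 - a) * t * e2])

def greedy_d_optimal(t_all, n_keep, jac=sensitivities):
    """Select n_keep time points whose sensitivity matrix J maximizes
    det(J.T @ J), adding one point at a time (a greedy heuristic; a tiny
    ridge keeps the determinant defined while J is still rank-deficient)."""
    chosen, remaining = [], list(range(len(t_all)))
    for _ in range(n_keep):
        best, best_det = None, -np.inf
        for i in remaining:
            J = jac(t_all[chosen + [i]])
            d = np.linalg.det(J.T @ J + 1e-10 * np.eye(J.shape[1]))
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
        remaining.remove(best)
    return sorted(t_all[chosen])
```

The greedy search is not guaranteed to find the global D-optimum, but it captures the idea of concentrating samples where the decay is most informative about each parameter.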
Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis
2011-11-01
Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
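The simulation approach can be illustrated with a minimal Monte Carlo sketch. It uses naive OLS inference on a segmented regression with AR(1) errors rather than the full segmented AR error models of the study; the sample sizes, autocorrelation, and 1.96 cutoff are illustrative.

```python
import numpy as np

def simulate_power(n_pre=24, n_post=24, effect=1.0, rho=0.2,
                   n_sim=500, seed=0):
    """Monte Carlo power of the level-change term in a segmented regression
    with AR(1) errors.  Inference here is naive OLS; a full study would use
    GLS or an AR error model as in the paper."""
    rng = np.random.default_rng(seed)
    n = n_pre + n_post
    t = np.arange(n)
    step = (t >= n_pre).astype(float)
    # Columns: intercept, pre-trend, level change, slope change.
    X = np.column_stack([np.ones(n), t, step, (t - n_pre) * step])
    hits = 0
    for _ in range(n_sim):
        e = np.empty(n)
        e[0] = rng.standard_normal()
        for i in range(1, n):  # stationary AR(1) with unit marginal variance
            e[i] = rho * e[i - 1] + rng.standard_normal() * np.sqrt(1 - rho**2)
        y = effect * step + e
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        tstat = beta[2] / np.sqrt(cov[2, 2])
        hits += abs(tstat) > 1.96
    return hits / n_sim
```

Comparing a large effect against a null effect reproduces the qualitative pattern the paper reports: power rises with effect size, and small effects with few time points leave the design underpowered.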
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
Improvement of Automated POST Case Success Rate Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Zwack, Matthew R.; Dees, Patrick D.
2017-01-01
During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. 
With only an 8% pass rate, tens or hundreds of thousands of reps may be needed to be confident that the best repetition is at least close to the global optimum. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions is used to generate a seed point, the graph method may inherit inaccuracies as it chains DOE cases from non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.
High gain antenna pointing on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Vanelli, C. Anthony; Ali, Khaled S.
2005-01-01
This paper describes the algorithm used to point the high gain antennae on NASA/JPL's Mars Exploration Rovers. The gimballed antennae must track the Earth as it moves across the Martian sky during communication sessions. The algorithm accounts for (1) gimbal range limitations, (2) obstructions both on the rover and in the surrounding environment, (3) kinematic singularities in the gimbal design, and (4) up to two joint-space solutions for a given pointing direction. The algorithm computes the intercept-times for each of the occlusions and chooses the jointspace solution that provides the longest track time before encountering an occlusion. Upon encountering an occlusion, the pointing algorithm automatically switches to the other joint-space solution if it is not also occluded. The algorithm has successfully provided flop-free pointing for both rovers throughout the mission.
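The solution-selection logic can be sketched as follows. The data structures, labels, and occlusion times are hypothetical illustrations of the idea, not the flight software: for each joint-space solution, find the earliest time any occlusion (gimbal limit, rover deck, terrain) is entered, then track with the solution whose earliest occlusion comes last.

```python
def pick_joint_solution(solutions, occlusion_times):
    """Choose the joint-space solution whose earliest occlusion comes last.

    `solutions` maps a solution label to its joint angles; `occlusion_times`
    maps the same label to the intercept times (s) at which each modeled
    occlusion would be entered.  An empty list means no occlusion is ever hit.
    """
    def first_occlusion(label):
        times = occlusion_times[label]
        return min(times) if times else float("inf")

    best = max(solutions, key=first_occlusion)
    return best, solutions[best], first_occlusion(best)
```

On encountering the chosen solution's occlusion at runtime, the same routine re-run over the remaining solutions implements the flop to the alternate joint-space solution, mirroring the behavior the abstract describes.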
Design of point-of-care (POC) microfluidic medical diagnostic devices
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
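The pulse-and-subtract scheme for improving signal-to-noise can be sketched as follows, assuming a fixed frame layout (an assumption for illustration) in which each excitation period begins with the LED-on samples and ends with LED-off samples used as the background estimate.

```python
import numpy as np

def pulsed_demodulation(signal, period, on_samples):
    """Average (LED-on minus LED-off) over many excitation periods.

    Subtracting the off-phase samples removes slowly varying background
    (ambient light, detector offset) from the photodiode signal; only the
    modulated fluorescence/scatter component survives the average.
    """
    n_periods = len(signal) // period
    frames = np.asarray(signal[: n_periods * period]).reshape(n_periods, period)
    on = frames[:, :on_samples].mean(axis=1)
    off = frames[:, on_samples:].mean(axis=1)
    return (on - off).mean()
```

With a constant background of 5 units and a pulsed component of 2 units, the demodulated estimate recovers the 2-unit signal regardless of the background level.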
Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo
2016-10-12
The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, i.e., those containing unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm achieved high prediction accuracy, with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rates per individual recording (DRI) were 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRIs were 32, 14, and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate for only the young individual dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.
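A drastically simplified version of ECG-referenced fiducial detection with quality gating might look like the sketch below. The search window and amplitude threshold are made-up values, and taking the largest peak after the R-wave as the aortic-opening (AO) point is a crude stand-in for the trained, far more robust delineation the paper describes.

```python
import numpy as np

def detect_ao_points(scg, r_peaks, fs, win=(0.0, 0.2), min_amp=0.1):
    """Take the largest SCG peak in a fixed window after each ECG R-wave as
    the AO fiducial point, and discard cycles whose peak is too weak to
    trust (the low-quality-cycle gating idea, reduced to one threshold)."""
    lo, hi = int(win[0] * fs), int(win[1] * fs)
    ao, rejected = [], 0
    for r in r_peaks:
        seg = scg[r + lo : r + hi]
        if len(seg) == 0 or seg.max() < min_amp:
            rejected += 1          # low-quality cycle: skip it
            continue
        ao.append(r + lo + int(np.argmax(seg)))
    return np.array(ao), rejected
```

Differences between consecutive fiducial points (e.g. R-to-AO) then give the cardiac time intervals of interest.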
Time-free transfers between libration-point orbits in the elliptic restricted problem
NASA Astrophysics Data System (ADS)
Howell, K. C.; Hiday, L. A.
1992-08-01
This work is directed toward the formulation of a strategy to design optimal time-free impulsive transfers between 3D libration-point orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. Inferior transfers that move a spacecraft from a large halo orbit to a smaller halo orbit are considered here. Primer vector theory is applied to nonoptimal impulsive trajectories in the elliptic restricted three-body problem in order to establish whether the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The addition of interior impulses is also considered. Results indicate that a substantial savings in fuel can be achieved by the allowance for coastal periods on the specified libration-point orbits. The resulting time-free inferior transfers are compared to time-free superior transfers between halo orbits of equal z-amplitude separation.
Media processors using a new microsystem architecture designed for the Internet era
NASA Astrophysics Data System (ADS)
Wyland, David C.
1999-12-01
The demands of digital image processing, communications and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point. Hardware design can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software based designs running on conventional processors can deliver working designs in a reasonable time and with low risk but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost and low risk of software designs.
Simple tunnel diode circuit for accurate zero crossing timing
NASA Technical Reports Server (NTRS)
Metz, A. J.
1969-01-01
A tunnel diode circuit capable of timing the zero-crossing point of bipolar pulses provides an effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.
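A software analogue makes the timing problem concrete: given samples of a bipolar pulse, the zero-crossing instant can be estimated by linear interpolation between the two samples that straddle zero. This is only an illustration of the measurement the hardware circuit performs, not a model of the tunnel diode itself.

```python
def zero_crossing_time(t, v):
    """Estimate the zero-crossing instant of a sampled bipolar pulse.

    Scans for a sign change and linearly interpolates between the
    straddling samples; returns None if the waveform never crosses zero.
    """
    for i in range(len(v) - 1):
        if v[i] == 0.0:
            return t[i]
        if v[i] * v[i + 1] < 0.0:
            frac = v[i] / (v[i] - v[i + 1])
            return t[i] + frac * (t[i + 1] - t[i])
    return None
```

The interpolation makes the estimate largely independent of the pulse's shape and amplitude, which parallels the circuit's insensitivity to input waveshape.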
Pointing History Engine for the Spitzer Space Telescope
NASA Technical Reports Server (NTRS)
Bayard, David; Ahmed, Asif; Brugarolas, Paul
2007-01-01
The Pointing History Engine (PHE) is a computer program that provides mathematical transformations needed to reconstruct, from downlinked telemetry data, the attitude of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility) as a function of time. The PHE also serves as an example for development of similar pointing reconstruction software for future space telescopes. The transformations implemented in the PHE take account of the unique geometry of the Spitzer telescope-pointing chain, including all data on relative alignments of components, and all information available from attitude-determination instruments. The PHE makes it possible to coordinate attitude data with observational data acquired at the same time, so that any observed astronomical object can be located for future reference and re-observation. The PHE is implemented as a subroutine used in conjunction with telemetry-formatting services of the Mission Image Processing Laboratory of NASA's Jet Propulsion Laboratory to generate the Boresight Pointing History File (BPHF). The BPHF is an archival database designed to serve as Spitzer's primary astronomical reference documenting where the telescope was pointed at any time during its mission.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., designed to furnish the power to pull, carry, propel, or drive implements that are designed for agriculture... point of the hood does not exceed 60 inches, and (4) The tractor is designed so that the operator.... The seat mounting shall be capable of withstanding this load plus a load equal to four times the...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., designed to furnish the power to pull, carry, propel, or drive implements that are designed for agriculture... point of the hood does not exceed 60 inches, and (4) The tractor is designed so that the operator.... The seat mounting shall be capable of withstanding this load plus a load equal to four times the...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., designed to furnish the power to pull, carry, propel, or drive implements that are designed for agriculture... point of the hood does not exceed 60 inches, and (4) The tractor is designed so that the operator.... The seat mounting shall be capable of withstanding this load plus a load equal to four times the...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., designed to furnish the power to pull, carry, propel, or drive implements that are designed for agriculture... point of the hood does not exceed 60 inches, and (4) The tractor is designed so that the operator.... The seat mounting shall be capable of withstanding this load plus a load equal to four times the...
Planned Missing Designs to Optimize the Efficiency of Latent Growth Parameter Estimates
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Jia, Fan; Wu, Wei; Little, Todd D.
2014-01-01
We examine the performance of planned missing (PM) designs for correlated latent growth curve models. Using simulated data from a model where latent growth curves are fitted to two constructs over five time points, we apply three kinds of planned missingness. The first is item-level planned missingness using a three-form design at each wave such…
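The three-form design mentioned above gives every respondent a common item block plus two of three rotating blocks, so each item pair is still observed jointly in at least one form. A minimal sketch of the standard assignment:

```python
def three_form_assignment(form):
    """Item blocks received under a classic three-form planned-missing
    design: every form gets the common block X plus two of A, B, C, so
    each rotating block is missing from exactly one form."""
    forms = {
        1: ("X", "A", "B"),
        2: ("X", "A", "C"),
        3: ("X", "B", "C"),
    }
    return forms[form]
```

Because every pair of blocks co-occurs in some form, all inter-item covariances remain estimable, which is what lets latent growth models be fitted to the incomplete data.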
ERIC Educational Resources Information Center
Ballmer, Noelle C.
2017-01-01
As the push towards lowering attrition of university students intensifies, particularly for first-time-in-college freshmen, administrators and campus leaders are increasingly designing and implementing co-curricular programs to support this population in order to positively impact student outcomes, namely, the grade point average, student…
Code of Federal Regulations, 2013 CFR
2013-07-01
... features permanently integrated into the design of the unit. Emission point means an individual tank... system is not a drain and collection system that is designed and operated for the sole purpose of..., or transfer system used to manage off-site material. Off-site material service means any time when a...
Code of Federal Regulations, 2011 CFR
2011-07-01
... features permanently integrated into the design of the unit. Emission point means an individual tank... system is not a drain and collection system that is designed and operated for the sole purpose of..., or transfer system used to manage off-site material. Off-site material service means any time when a...
NASA Astrophysics Data System (ADS)
Morrison, R. E.; Robinson, S. H.
A continuous wave Doppler radar system has been designed which is portable, easily deployed, and remotely controlled. The heart of this system is a DSP/control board using an Analog Devices ADSP-21020 40-bit floating-point digital signal processor (DSP). Two 18-bit audio A/D converters provide digital input to the DSP/controller board for near-real-time target detection. Program memory for the DSP is dual-ported with an Intel 87C51 microcontroller, allowing DSP code to be uploaded or downloaded from a central controlling computer. The 87C51 provides overall system control for the remote radar and includes a time-of-day/day-of-year real-time clock, system identification (ID) switches, and input/output (I/O) expansion via an Intel 82C55 I/O expander.
Wonder, Amy Hagedorn; York, Jacki; Jackson, Kathryn L; Sluys, Teresa D
2017-10-01
The aim of this study was to examine the loss of Magnet® designation and how RNs' work engagement changed at 1 community hospital. The importance of RN work engagement to promote quality and safety is widely recognized in healthcare. Ongoing consistent research is critical to determine what organizational structures are needed to support RN work engagement. This was a comparative, descriptive, correlational study of RN cohorts at 2 time points: time 1 (T1), in 2011 during Magnet designation (n = 119), and time 2 (T2), in 2016, approximately 2 years after the loss of Magnet designation (n = 140). The cohort of RNs at T2 reported significantly lower work engagement in the time period after the loss of Magnet designation when compared with the RN cohort at T1 during Magnet designation (P ≤ .0002). These results provide insights for clinical leaders striving to support a culture of RN work engagement and quality care.
22 CFR 202.2 - Shipments eligible for reimbursement of freight charges.
Code of Federal Regulations, 2011 CFR
2011-04-01
... time can be effected by the utilization of points of entry other than ports. (b) Shipments shall be... development, relief, and rehabilitation in nations or areas designated by the Administrator of AID from time to time, agencies may be reimbursed by AID within specified limitations for freight charges incurred...
Roush, W B; Boykin, D; Branton, S L
2004-08-01
A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., lengths of time the diets were fed) add up to unity (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 birds per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.
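The 10-point augmented simplex-centroid design for three mixture components can be generated directly: three vertices, three binary midpoints, the centroid, and three axial points halfway between each vertex and the centroid (the usual augmentation). Mapping the proportions onto the 48-d period recovers design points like the feeding schedules in the study.

```python
from fractions import Fraction

def augmented_simplex_centroid():
    """The 10 design points of an augmented simplex-centroid design for
    three mixture components (here: fractions of the 48-d period spent on
    starter, grower, and finisher diets)."""
    F = Fraction
    vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    midpoints = [(F(1, 2), F(1, 2), 0), (F(1, 2), 0, F(1, 2)),
                 (0, F(1, 2), F(1, 2))]
    centroid = [(F(1, 3), F(1, 3), F(1, 3))]
    axial = [(F(2, 3), F(1, 6), F(1, 6)), (F(1, 6), F(2, 3), F(1, 6)),
             (F(1, 6), F(1, 6), F(2, 3))]
    return vertices + midpoints + centroid + axial

def as_days(point, total=48):
    """Convert a proportion triple into days on each diet."""
    return tuple(float(total * p) for p in point)
```

Every point sums to unity by construction, which is the defining constraint of a mixture design.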
SPACEBAR: Kinematic design by computer graphics
NASA Technical Reports Server (NTRS)
Ricci, R. J.
1975-01-01
The interactive graphics computer program SPACEBAR, conceived to reduce the time and complexity associated with the development of kinematic mechanisms on the design board, was described. This program allows the direct design and analysis of mechanisms right at the terminal screen. All input variables, including linkage geometry, stiffness, and applied loading conditions, can be fed into or changed at the terminal and may be displayed in three dimensions. All mechanism configurations can be cycled through their range of travel and viewed in their various geometric positions. Output data includes geometric positioning in orthogonal coordinates of each node point in the mechanism, velocity and acceleration of the node points, and internal loads and displacements of the node points and linkages. All analysis calculations take at most a few seconds to complete. Output data can be viewed at the scope and also printed at the discretion of the user.
Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh
2015-09-01
We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.
Alternatives for jet engine control
NASA Technical Reports Server (NTRS)
Leake, R. J.; Sain, M. K.
1978-01-01
General goals of the research were classified into two categories. The first category involves the use of modern multivariable frequency domain methods for control of engine models in the neighborhood of a quiescent point. The second category involves the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. In the frequency domain category, works were published in the areas of low-interaction design, polynomial design, and multiple setpoint studies. A number of these ideas progressed to the point at which they are starting to attract practical interest. In the nonlinear category, advances were made both in engine modelling and in the details associated with software for determination of time optimal controls. Nonlinear models for a two spool turbofan engine were expanded and refined; and a promising new approach to automatic model generation was placed under study. A two time scale scheme was developed to do two-dimensional dynamic programming, and an outward spiral sweep technique has greatly speeded convergence times in time optimal calculations.
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
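The curve-fitting step can be sketched as follows: a low-order polynomial through the key Pareto points (minimum-mass, maximum frequency-to-mass, maximum-frequency designs) approximates the full Pareto-optimal front in frequency-mass space. The three key-point (mass, frequency) values below are invented for illustration, not taken from the truss study.

```python
import numpy as np

def pareto_curve(key_masses, key_freqs, degree=2):
    """Fit a low-order polynomial through the key Pareto points and return
    a predictor of optimized fundamental frequency at intermediate mass."""
    coeffs = np.polyfit(key_masses, key_freqs, degree)
    return np.poly1d(coeffs)

# Hypothetical key points: (mass in kg, fundamental frequency in Hz).
f = pareto_curve([120.0, 200.0, 320.0], [2.1, 3.4, 4.0])
```

Because a quadratic through three points interpolates them exactly, the fitted curve reproduces the key designs and provides frequency estimates for intermediate-mass trusses without further optimization runs.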
Overview of timing/synchronization for digital communications
NASA Technical Reports Server (NTRS)
Stover, H. A.
1978-01-01
Timing/synchronization systems in general, and switched systems in particular, are explained. The overview points out some of the criteria that greatly influence timing/synchronization subsystem design for a military communications network but have little or no significance for civil systems. Timing techniques are evaluated in terms of fundamental features, and different combinations of these features cover most possibilities from which a synchronous timing system could be chosen.
Brain MRI volumetry in a single patient with mild traumatic brain injury.
Ross, David E; Castelvecchi, Cody; Ochs, Alfred L
2013-01-01
This letter to the editor describes the case of a 42 year old man with mild traumatic brain injury and multiple neuropsychiatric symptoms which persisted for a few years after the injury. Initial CT scans and MRI scans of the brain showed no signs of atrophy. Brain volume was measured using NeuroQuant®, an FDA-approved, commercially available software method. Volumetric cross-sectional (one point in time) analysis also showed no atrophy. However, volumetric longitudinal (two points in time) analysis showed progressive atrophy in several brain regions. This case illustrated in a single patient the principle discovered in multiple previous group studies, namely that the longitudinal design is more powerful than the cross-sectional design for finding atrophy in patients with traumatic brain injury.
Centrifugal and Axial Pump Design and Off-Design Performance Prediction
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1995-01-01
A meanline pump-flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump-flow code PUMPA was written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design-point rotor efficiency and slip factors are obtained from empirical correlations to rotor-specific speed and geometry. The pump code can model axial, inducer, mixed-flow, and centrifugal pumps and can model multistage pumps in series. The rapid input setup and computer run time for this meanline pump flow code make it an effective analysis and conceptual design tool. The map-generation capabilities of the code provide the information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of PUMPA permit the user to do parametric design space exploration of candidate pump configurations and to provide head-flow maps for engine system evaluation.
Development of a preprototype times wastewater recovery subsystem, addendum
NASA Technical Reports Server (NTRS)
Dehner, G. F.
1984-01-01
Six tasks are described reflecting subsystem hardware and software modifications and test evaluation of a TIMES wastewater recovery subsystem. The overall results are illustrated in a figure which shows the water production rate, the specific energy corrected to 26.5 VDC, and the product water conductivity at various points in the testing. Four tasks are described reflecting studies performed to develop a preliminary design concept for a next generation TIMES. The overall results of the study are the completion of major design analyses and preliminary configuration layout drawings.
Perfectionism, Procrastination, and Psychological Distress
ERIC Educational Resources Information Center
Rice, Kenneth G.; Richardson, Clarissa M. E.; Clark, Dustin
2012-01-01
Using a cross-panel design and data from 2 successive cohorts of college students ( N = 357), we examined the stability of maladaptive perfectionism, procrastination, and psychological distress across 3 time points within a college semester. Each construct was substantially stable over time, with procrastination being especially stable. We also…
Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei
2015-12-01
The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Burkholder, Gary J.; Harlow, Lisa L.
2003-01-01
Tested a model of HIV behavior risk, using a fully cross-lagged, longitudinal design to illustrate the analysis of larger structural equation models. Data from 527 women who completed a survey at three time points show excellent fit of the model to the data. (SLD)
40 CFR 91.1203 - Emission warranty, warranty period.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subsequent purchaser, that the engine is designed, built, and equipped so as to conform at the time of sale... owner's choosing, such items as spark plugs, points, condensers, and any other part, item, or device related to emission control (but not designed for emission control) under the terms of the last sentence...
Liu, Tianqi; Wang, Jing; Liao, Yipeng; Wang, Xin; Wang, Shanshan
2018-04-30
An all-fiber Mach-Zehnder interferometer (MZI) for temperature sensing at two quasi-continuous points in seawater is proposed. Based on beam propagation theory, the transmission spectrum is designed to present two sets of clear, independent interferences. Following this design, the MZI is fabricated and temperature sensing at two points in seawater is demonstrated, with sensitivities of 42.69 pm/°C and 39.17 pm/°C, respectively. With further optimization, a sensitivity of 80.91 pm/°C can be obtained, which is 3-10 times higher than that of fiber Bragg gratings and microfiber resonators, and higher than that of almost all similar MZI-based temperature sensors. In addition, factors affecting sensitivity are discussed and verified in experiment. The two-point temperature sensing demonstrated here offers a simple and compact construction, a robust structure, easy fabrication, high sensitivity, immunity to salinity, and a tunable distance of 1-20 centimeters between the two points, which may provide a reference for macroscopic oceanic research and other MZI-based sensing applications.
An Adaptive Cross-Architecture Combination Method for Graph Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
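The top-down/bottom-up switching that the combination method performs can be sketched as follows. This is a minimal illustration in which a fixed frontier-size threshold (`alpha`) stands in for the paper's regression-based runtime prediction of the optimal switching point; the adjacency-list and dictionary bookkeeping are hypothetical choices, not from the paper.

```python
def hybrid_bfs(adj, source, alpha=0.05):
    """Direction-optimizing BFS sketch: use top-down expansion while the
    frontier is small, and switch to bottom-up (unvisited vertices look
    for a parent in the frontier) once the frontier exceeds a fraction
    `alpha` of all vertices. Returns a dict of BFS levels."""
    n = len(adj)
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        next_frontier = set()
        if len(frontier) < alpha * n:
            # top-down: each frontier vertex pushes to its neighbors
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = level
                        next_frontier.add(v)
        else:
            # bottom-up: each unvisited vertex pulls from the frontier
            for v in range(n):
                if v not in dist and any(u in frontier for u in adj[v]):
                    dist[v] = level
                    next_frontier.add(v)
        frontier = next_frontier
    return dist
```

The bottom-up pass is cheaper when the frontier is large because most unvisited vertices find a frontier parent after inspecting only a few neighbors; choosing where to switch is exactly the prediction problem the abstract addresses.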
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
Impact of Hypnosis Intervention in Alleviating Psychological and Physical Symptoms During Pregnancy.
Beevi, Zuhrah; Low, Wah Yun; Hassan, Jamiyah
2016-04-01
Physical symptoms (e.g., vomiting) and psychological symptoms (stress, anxiety, and depression) during pregnancy are common. Various strategies, such as hypnosis, are available to reduce these symptoms. The objective of this study was to investigate the impact of a hypnosis intervention in reducing physical and psychological symptoms during pregnancy. A pre-test/post-test quasi-experimental design was employed. The hypnosis intervention was given to the experimental group participants at weeks 16 (baseline), 20 (time point 1), 28 (time point 2), and 36 (time point 3) of their pregnancy. Participants in the control group received only traditional antenatal care. Participants from both groups completed the Depression Anxiety Stress Scale-21 (DASS-21) and a Pregnancy Symptoms Checklist at weeks 16, 20, 28, and 36 of pregnancy. Results indicated that stress and anxiety symptoms were significantly reduced for the experimental group, but not for the control group. Although mean differences in depressive symptoms were not significant, the experimental group had lower symptoms at time point 3. Results for physical symptoms showed significant group differences at time point 3, indicating a reduction in the experience of physical symptoms for the experimental group participants. Our study showed that hypnosis intervention during pregnancy aided in reducing physical and psychological symptoms.
77 FR 15813 - Preoperational Testing of Instrument and Control Air Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-16
... seismic requirement, ICAS air-dryer testing to meet dew point design requirements, ICAS accumulator check... ensure consideration only for comments received on or before this date. Although a time limit is given... improvements in all published guides are encouraged at any time. ADDRESSES: You may access information and...
40 CFR 797.1300 - Daphnid acute toxicity test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...
40 CFR 797.1300 - Daphnid acute toxicity test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...
40 CFR 797.1300 - Daphnid acute toxicity test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...
40 CFR 797.1300 - Daphnid acute toxicity test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...
40 CFR 797.1300 - Daphnid acute toxicity test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... continuous exposure over a specified period of time. In this guideline, the effect measured is immobilization... at a point in time, or passing through the test chamber during a specific interval. (7) Static system..., daphnids which have been cultured and acclimated in accordance with the test design are randomly placed...
Turbopump Performance Improved by Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to traditional gradient-based methods, evolutionary algorithms (EAs) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EAs search from multiple points instead of moving from a single point, and they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling with any evaluation code. Parallel efficiency is also very high with a simple master-slave scheme for function evaluations, since such evaluations (e.g., computational fluid dynamics) often consume the most CPU time. Application of EAs to multiobjective design problems is also straightforward because EAs maintain a population of design candidates in parallel. Because of these advantages, EAs are a unique and attractive approach to real-world design optimization problems.
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and, more recently, as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to reach the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventive vaccine studies.
Ng, Stella L; Bartlett, Doreen J; Lucy, S Deborah
2013-05-01
Discussions about professional behaviors are growing increasingly prevalent across health professions, especially as a central component of education programs. A strong critical thinking disposition, paired with critical consciousness, may provide future health professionals with a foundation for solving challenging practice problems through the application of sound technical skill and scientific knowledge without sacrificing sensitive, empathic, client-centered practice. In this article, we describe an approach to monitoring student development of critical thinking dispositions and key professional behaviors as a way to inform faculty members' and clinical supervisors' support of students and ongoing curriculum development. We designed this exploratory study to describe the trajectory of change for a cohort of audiology students' critical thinking dispositions (measured by the California Critical Thinking Disposition Inventory [CCTDI]) and professional behaviors (using the Comprehensive Professional Behaviors Development Log-Audiology [CPBDL-A]) in an audiology program. Implications for the CCTDI and CPBDL-A in audiology entry-to-practice curricula and professional development will be discussed. This exploratory study involved a cohort of audiology students, studied over a two-year period, using a one-group repeated measures design. Eighteen audiology students (two male and 16 female) began the study. At the third and final data collection point, 15 students completed the CCTDI, and nine students completed the CPBDL-A. The CCTDI and CPBDL-A were each completed at three time points: at the beginning, at the middle, and near the end of the audiology education program. Data are presented descriptively in box plots to examine the trends of development for each critical thinking disposition dimension and each key professional behavior as well as for an overall critical thinking disposition score.
For the CCTDI, there was a general downward trend from time point 1 to time point 2 and a general upward trend from time point 2 to time point 3. Students demonstrated upward trends from the initial to final time point for their self-assessed development of professional behaviors as indicated on the CPBDL-A. The CCTDI and CPBDL-A can be used by audiology education programs as mechanisms for inspiring, fostering, and monitoring the development of critical thinking dispositions and key professional behaviors in students. Feedback and mentoring about dispositions and behaviors in conjunction with completion of these measures is recommended for inspiring and fostering these key professional attributes. American Academy of Audiology.
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm employed to handle the depth image. First, the mobile platform can move flexibly, and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor, and the results show that noise removal is improved compared with bilateral filtering. Off-line, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces depth-image processing time and improves the quality of the point clouds that are built.
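For reference, the classical bilateral filter that LBF improves upon can be sketched as follows. The abstract does not give the LBF formulation, so this shows only the standard baseline, with hypothetical parameter values (`radius`, `sigma_s`, `sigma_r`).

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Classical bilateral filter on a depth image: each output pixel is
    a weight-normalized average of its neighborhood, weighted by spatial
    closeness (sigma_s) and depth similarity (sigma_r), so edges are
    preserved while flat regions are smoothed."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    pad = np.pad(depth.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - depth[i, j])**2 / (2 * sigma_r**2))  # range kernel
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

The per-pixel range kernel is what makes the naive filter slow, which is the cost a local variant such as LBF would target.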
Digital control system for space structure dampers
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1985-01-01
A digital controller was developed using an SKD-51 System Design Kit, which incorporates an 8031 microcontroller. The necessary interfaces were installed in the wire-wrap area of the SKD-51, and a pulse-width modulator was developed to drive the coil of the actuator. Control equations were also developed using floating-point arithmetic. The design of the digital control system is emphasized, and it is shown that, provided certain rules are followed, an adequate design can be achieved. It is recommended that the so-called w-plane design method be used, and that the time elapsed before output of the updated coil-force signal be kept as small as possible. However, the cycle time for the controller should be watched carefully, because very small values for this time can lead to digital noise.
NASA Astrophysics Data System (ADS)
Ueno, Tetsuro; Hino, Hideitsu; Hashimoto, Ai; Takeichi, Yasuo; Sawada, Masahiro; Ono, Kanta
2018-01-01
Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency. We examined X-ray magnetic circular dichroism (XMCD) spectroscopy for the applicability of a machine learning technique to spectroscopy. An XMCD spectrum was predicted by Gaussian process modelling with learning of an experimental spectrum using a limited number of observed data points. Adaptive sampling of data points with maximum variance of the predicted spectrum successfully reduced the total data points for the evaluation of magnetic moments while providing the required accuracy. The present method reduces the time and cost for XMCD spectroscopy and has potential applicability to various spectroscopies.
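The maximum-variance acquisition rule described above can be sketched with a plain-NumPy Gaussian process; the squared-exponential kernel, length scale, grid, and point counts are hypothetical stand-ins for the XMCD experimental setup.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def adaptive_sample(grid, n_init=4, n_total=12, ls=1.0):
    """Max-variance adaptive sampling sketch: starting from a few evenly
    spaced measurements, repeatedly add the grid point where the GP
    predictive variance is largest. For a GP the predictive variance
    depends only on *where* measurements were taken, not on the measured
    values, so this sketch needs no spectrum at all."""
    idx = list(np.linspace(0, len(grid) - 1, n_init).astype(int))
    while len(idx) < n_total:
        x = grid[idx]
        K = rbf(x, x, ls) + 1e-8 * np.eye(len(x))   # jitter for stability
        Ks = rbf(grid, x, ls)
        # predictive variance: k(x*,x*) - k(x*,X) K^{-1} k(X,x*)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        var[idx] = -np.inf                          # exclude measured points
        idx.append(int(np.argmax(var)))
    return sorted(idx)
```

In a real experiment the fitted posterior mean would also be used to evaluate the magnetic moments once the variance everywhere falls below the required accuracy.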
[Development of the automatic dental X-ray film processor].
Bai, J; Chen, H
1999-07-01
This paper introduces a multiple-point technique for detecting the density of dental X-ray films. With the infrared multiple-point detection technique, a single-chip microcomputer control system is used to analyze the effectiveness of film developing in real time in order to achieve a good image. Based on this technology, we designed an intelligent automatic dental X-ray film processor.
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep learning has been used extensively for image classification in recent years, and its use for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02% total error, 2.15% type I error, and 6.14% type II error.
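The whole-cloud-to-single-image conversion can be illustrated with a minimal rasterization sketch. The abstract does not specify the feature channels used, so a lowest-elevation-per-cell image stands in here; the `cell` size and the min-z rule are assumptions for illustration only.

```python
import numpy as np

def cloud_to_image(points, cell=1.0):
    """Rasterize a point cloud (N x 3 array of x, y, z) into a single
    2-D image whose pixels hold the lowest z in each grid cell; NaN
    marks empty cells. Converting the whole cloud once like this avoids
    the per-point image generation of earlier approaches."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)                 # shift indices to start at 0
    h, w = xy.max(axis=0) + 1
    img = np.full((h, w), np.nan)
    for (i, j), z in zip(xy, points[:, 2]):
        if np.isnan(img[i, j]) or z < img[i, j]:
            img[i, j] = z                # keep the lowest point per cell
    return img
```

A pixel-wise classifier such as an FCN can then label every cell of this image in one forward pass instead of one pass per point.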
A fast process development flow by applying design technology co-optimization
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Yeh, Shin-Shing; Ou, Tsong-Hua; Lin, Hung-Yu; Mai, Yung-Ching; Lin, Lawrence; Lai, Jun-Cheng; Lai, Ya Chieh; Xu, Wei; Hurat, Philippe
2017-03-01
Beyond the 40 nm technology node, pattern weak points and hotspot types increase dramatically, and the typical patterns used for lithography verification incur a huge turn-around time (TAT) in handling the design complexity. Therefore, in order to speed up process development and increase pattern variety, accurate design guidelines and realistic design combinations are required. This paper presents a flow for creating a cell-based layout, a lite realistic design, to identify early the problematic patterns that will negatively affect yield. A new random layout generation method, the Design Technology Co-Optimization Pattern Generator (DTCO-PG), is reported in this paper to create cell-based designs. DTCO-PG also characterizes randomness and fuzziness, so that it can build up a machine learning scheme whose model can be trained on previous results and then generate patterns never seen in a lite design. This methodology not only increases pattern diversity but also identifies potential hotspots early. This paper also demonstrates an integrated flow from DTCO pattern generation to layout modification. Optical proximity correction (OPC) and lithographic simulation are then applied to the DTCO-PG design database to detect hotspots, which can then be fixed automatically through the procedure or handled manually. This flow gives process evolution a faster development cycle time, more complex pattern designs, a higher probability of finding potential hotspots at an early stage, and a more holistic yield-ramping operation.
It's time to reinvent the general aviation airplane
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1988-01-01
Current designs for general aviation airplanes have become obsolete, and avenues for major redesign must be considered. New designs should incorporate recent advances in electronics, aerodynamics, structures, materials, and propulsion. Future airplanes should be optimized to operate satisfactorily in a positive air traffic control environment, to afford safety and comfort for point-to-point transportation, and to take advantage of automated manufacturing techniques and high production rates. These requirements have broad implications for airplane design and flying qualities, leading to a concept for the Modern Equipment General Aviation (MEGA) airplane. Synergistic improvements in design, production, and operation can provide a much needed fresh start for the general aviation industry and the traveling public. In this investigation a small four place airplane is taken as the reference, although the proposed philosophy applies across the entire spectrum of general aviation.
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient, and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, nearest-point and max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape, and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
Optical Interface States Protected by Synthetic Weyl Points
NASA Astrophysics Data System (ADS)
Wang, Qiang; Xiao, Meng; Liu, Hui; Zhu, Shining; Chan, C. T.
2017-07-01
Weyl fermions have not been found in nature as elementary particles, but they emerge as nodal points in the band structures of electronic and classical wave crystals. Novel phenomena such as Fermi arcs and the chiral anomaly have fueled interest in these topological points, which are frequently perceived as monopoles in momentum space. Here, we report the experimental observation of generalized optical Weyl points inside the parameter space of a photonic crystal with a specially designed four-layer unit cell. The reflection at the surface of a truncated photonic crystal exhibits phase vortexes due to the synthetic Weyl points, which in turn guarantees the existence of interface states between the photonic crystal and any reflecting substrate. The reflection phase vortexes, confirmed for the first time in our experiments, serve as an experimental signature of the generalized Weyl points. The existence of these interface states is protected by the topological properties of the Weyl points, and the trajectories of these states in the parameter space resemble those of Weyl semimetal "Fermi arc surface states" in momentum space. Tracing the origin of interface states to the topological character of the parameter space paves the way for the rational design of strongly localized states with enhanced local fields.
van Det, M J; Meijerink, W J H J; Hoff, C; Middel, B; Pierie, J P E N
2013-08-01
INtraoperative Video Enhanced Surgical procedure Training (INVEST) is a new training method designed to improve the transition from basic skills training in a skills lab to procedural training in the operating theater. Traditionally, the master-apprentice model (MAM) is used for procedural training in the operating theater, but this model lacks uniformity and efficiency at the beginning of the learning curve. This study was designed to investigate the effectiveness and efficiency of INVEST compared to MAM. Ten surgical residents with no laparoscopic experience were recruited for a laparoscopic cholecystectomy training curriculum using either MAM or INVEST. After a uniform course in basic laparoscopic skills, each trainee performed six cholecystectomies that were digitally recorded. For 14 steps of the procedure, an observer blinded to the type of training determined whether the step was performed entirely by the trainee (2 points), partially by the trainee (1 point), or by the supervisor (0 points). Time measurements captured the total procedure time and the effective procedure time during which the trainee acted as the operating surgeon. Results were compared between the two groups. Trainees in the INVEST group were awarded significantly more points (115.8 vs. 70.2; p < 0.001) and performed more steps without the interference of the supervisor (46.6 vs. 18.8; p < 0.001). Total procedure time was not lengthened by INVEST, and the part performed by trainees was significantly larger (69.9 vs. 54.1%; p = 0.004). INVEST enhances effectiveness and training efficiency for procedural training inside the operating theater without compromising operating theater time efficiency.
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
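The governing equation behind this analysis, in its standard one-dimensional form (concentration C, velocity v, dispersion coefficient D), together with the two parameter sensitivities discussed above, can be written as follows; the paper's specific boundary conditions are not restated here.

```latex
\frac{\partial C}{\partial t}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
  - v\,\frac{\partial C}{\partial x},
\qquad
s_{v}(x,t) = \frac{\partial C}{\partial v},
\qquad
s_{D}(x,t) = \frac{\partial C}{\partial D}.
```

Principles (1)-(4) above describe how $s_{v}$ and $s_{D}$ vary in $x$ and $t$, and hence where and when observations constrain each parameter most tightly.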
Student Mobility, Dosage, and Principal Stratification in School-Based RCTs
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
In school-based randomized control trials (RCTs), a common design is to follow student cohorts over time. For such designs, education researchers usually focus on the place-based (PB) impact parameter, which is estimated using data collected on all students enrolled in the study schools at each data collection point. A potential problem with this…
Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-01-01
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
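The trade-off behind the word-length choice can be illustrated with a small fixed-point quantization sketch. The word lengths and the SQNR metric here are illustrative assumptions, not the paper's error-propagation model.

```python
import numpy as np

def quantize(x, word_len, frac_bits):
    """Round `x` to signed fixed-point with `word_len` total bits and
    `frac_bits` fractional bits, saturating at the representable range.
    (Hypothetical parameters; the paper derives the word length from a
    fixed-point error propagation model for the FFT.)"""
    scale = 2 ** frac_bits
    lo, hi = -2 ** (word_len - 1), 2 ** (word_len - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

def sqnr_db(x, word_len, frac_bits):
    """Signal-to-quantization-noise ratio (dB) of a quantized signal."""
    e = x - quantize(x, word_len, frac_bits)
    return 10 * np.log10(np.sum(x ** 2) / np.sum(e ** 2))
```

Each extra fractional bit buys roughly 6 dB of SQNR, which is the kind of relationship a word-length selection model trades against hardware cost.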
Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-06-24
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors are verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
Advanced X-Ray Timing Array Mission: Conceptual Spacecraft Design Study
NASA Technical Reports Server (NTRS)
Hopkins, R. C.; Johnson, L.; Thomas, H. D.; Wilson-Hodge, C. A.; Baysinger, M.; Maples, C. D.; Fabisinski, L.L.; Hornsby, L.; Thompson, K. S.; Miernik, J. H.
2011-01-01
The Advanced X-Ray Timing Array (AXTAR) is a mission concept for submillisecond timing of bright galactic x-ray sources. The two science instruments are the Large Area Timing Array (LATA) (a collimated instrument with 2-50-keV coverage and over 3 square meters of effective area) and a Sky Monitor (SM), which acts as a trigger for pointed observations of x-ray transients. The spacecraft conceptual design team developed two spacecraft concepts that will enable the AXTAR mission: A minimal configuration to be launched on a Taurus II and a larger configuration to be launched on a Falcon 9 or similar vehicle.
Design of automation tools for management of descent traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1988-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor, which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display presented on a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer-generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system, consisting of the descent advisor algorithm, a library of aircraft performance models, national airspace system databases, and interactive display software, has been implemented on a workstation made by Sun Microsystems, Inc.
It is planned to use this configuration in operational evaluations at an en route center.
A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.
1991-01-01
A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
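The polyphase front end described above can be sketched in a few lines: instead of windowing a single N-sample block before the FFT, T consecutive N-sample segments are weighted by slices of a length-T*N prototype filter and summed. The prototype filter here (a windowed sinc) is an illustrative assumption, not JPL's run-length-compressed coefficient set.

```python
import numpy as np

def polyphase_fft(x, n_channels, n_taps):
    """One output spectrum from the first n_channels*n_taps samples of x."""
    N, T = n_channels, n_taps
    # Prototype low-pass filter: windowed sinc, cutoff near one channel width.
    t = np.arange(N * T) - (N * T - 1) / 2.0
    h = np.sinc(t / N) * np.hamming(N * T)
    blocks = (x[: N * T] * h).reshape(T, N)   # weight, then fold into T blocks
    return np.fft.fft(blocks.sum(axis=0))     # sum blocks, single length-N FFT

# A tone centred on channel 7 should dominate that output bin.
fs, N, T = 1024.0, 64, 4
n = np.arange(N * T)
tone = np.cos(2 * np.pi * (7 * fs / N) * n / fs)
spectrum = np.abs(polyphase_fft(tone, N, T))
print("peak channel:", int(np.argmax(spectrum[: N // 2])))
```

Relative to a plain windowed FFT, the folded prototype filter gives much sharper channel selectivity for the same transform length, which is the "processing loss" recovery the abstract refers to.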
Interpreting the handling qualities of aircraft with stability and control augmentation
NASA Technical Reports Server (NTRS)
Hodgkinson, J.; Potsdam, E. H.; Smith, R. E.
1990-01-01
The general process of designing an aircraft for good flying qualities is first discussed. Lessons learned are pointed out, with piloted evaluation emerging as a crucial element. Two sources of rating variability in performing these evaluations are then discussed. First, the finite endpoints of the Cooper-Harper scale do not bias parametric statistical analyses unduly. Second, the wording of the scale does introduce some scatter. Phase lags generated by augmentation systems, as represented by equivalent time delays, often cause poor flying qualities. An analysis is introduced which allows a designer to relate any level of time delay to a probability of loss of aircraft control. This view of time delays should, it is hoped, allow better visibility of the time delays in the design process.
NASA Technical Reports Server (NTRS)
Lin, Jiguan Gene
1987-01-01
The quick suppression of the structural vibrations excited by bang-bang (BB) type time-optimal slew maneuvers via modal-dashpot design of velocity output feedback control was investigated. Simulation studies were conducted, and modal dashpots were designed for the SCOLE flexible body dynamics. A two-stage approach was proposed for rapid slewing and precision pointing/retargeting of large, flexible space systems: (1) slew the whole system like a rigid body in a minimum time under specified limits on the control moments and forces, and (2) damp out the excited structural vibrations afterwards. This approach was found promising. High-power modal dashpots can suppress very large vibrations, and can add a desirable amount of active damping to modeled modes. Unmodeled modes can also receive some concomitant active damping, as a benefit of spillover. Results also show that not all BB-type rapid pointing maneuvers will excite large structural vibrations. When properly selected small forces (e.g., vernier thrusters) are used to complete the specified slew maneuver in the shortest time, even BB-type maneuvers will excite only small vibrations (e.g., 0.3 ft peak deflection for a 130 ft beam).
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
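The slew-screening technique described above can be sketched as a simple angular threshold test: reuse the previous radiation calculation point unless the new observing orientation differs by more than some angle. The 10-degree threshold and the pointing sequence below are illustrative assumptions, not the WFIRST model's actual criterion.

```python
import numpy as np

def recalc_points(pointings, threshold_deg=10.0):
    """Indices of pointings whose environmental fluxes must be recalculated."""
    keep = [0]
    for i in range(1, len(pointings)):
        ref = pointings[keep[-1]]
        ang = np.degrees(np.arccos(np.clip(np.dot(pointings[i], ref), -1.0, 1.0)))
        if ang > threshold_deg:   # slew large enough to change the fluxes
            keep.append(i)
    return keep

def unit(theta_deg):
    """Unit pointing vector at angle theta in the x-y plane."""
    a = np.deg2rad(theta_deg)
    return np.array([np.cos(a), np.sin(a), 0.0])

slews = [unit(t) for t in (0, 2, 4, 30, 32, 80)]
print(recalc_points(slews))   # only the large slews trigger a recalculation
```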
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMARTs data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
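One of the standard estimators applied to SMART data, Q-learning with backward induction, can be sketched as follows: fit a stage-2 outcome model on the full history, replace each outcome with the value of the optimal stage-2 decision, then fit the stage-1 model on that pseudo-outcome. The generative model and linear Q-functions below are illustrative assumptions, not the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x1 = rng.normal(size=n)             # baseline severity
a1 = rng.choice([-1, 1], size=n)    # stage-1 randomized treatment
x2 = 0.5 * x1 + rng.normal(size=n)  # intermediate outcome
a2 = rng.choice([-1, 1], size=n)    # stage-2 randomized treatment
y = x1 + 0.3 * a1 + 0.8 * a2 * x2 + rng.normal(size=n)   # final outcome

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2: Q2 = b0 + b1*x1 + b2*a1 + b3*x2 + a2*(b4 + b5*x2)
X2 = np.column_stack([np.ones(n), x1, a1, x2, a2, a2 * x2])
b = fit(X2, y)
# Pseudo-outcome: value under the optimal stage-2 decision (sign of b4 + b5*x2)
v2 = X2[:, :4] @ b[:4] + np.abs(b[4] + b[5] * x2)
# Stage 1: Q1 = g0 + g1*x1 + g2*a1, fit to the pseudo-outcome
g = fit(np.column_stack([np.ones(n), x1, a1]), v2)
print(f"stage-2 interaction {b[5]:.2f} (truth 0.8), stage-1 effect {g[2]:.2f} (truth 0.3)")
```

The fitted interaction term is what recommends x2 as a tailoring variable: the optimal stage-2 treatment flips sign with the intermediate outcome.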
Single axis control of ball position in magnetic levitation system using fuzzy logic control
NASA Astrophysics Data System (ADS)
Sahoo, Narayan; Tripathy, Ashis; Sharma, Priyaranjan
2018-03-01
This paper presents the design and real-time implementation of fuzzy logic control (FLC) for controlling the position of a ferromagnetic ball by manipulating the current flowing in an electromagnet that changes the magnetic field acting on the ball. This system is highly nonlinear and open-loop unstable, and many unmeasurable disturbances act on it, making its control highly complex but interesting for any researcher in the control system domain. First the system is modelled using fundamental laws, which gives a nonlinear equation; the nonlinear model is then linearized at an operating point. The fuzzy logic controller is designed after studying the system in closed loop under PID control action. The controller is then implemented in real time using the Simulink real-time environment and tuned manually to obtain a stable and robust performance. The set point tracking performance of the FLC and PID controllers was compared and analyzed.
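The kind of Mamdani-style controller described above can be sketched with triangular membership functions over the position error and its derivative, a small rule table, and weighted-average defuzzification. The membership breakpoints, rule table, and singleton output levels are illustrative assumptions, not the paper's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

LABELS = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}
OUT = {"N": -1.0, "Z": 0.0, "P": 1.0}          # singleton output levels
RULES = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
         ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
         ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}

def flc(error, d_error):
    """Normalized coil-current correction from normalized error inputs."""
    num = den = 0.0
    for (le, lde), lout in RULES.items():
        w = min(tri(error, *LABELS[le]), tri(d_error, *LABELS[lde]))
        num += w * OUT[lout]
        den += w
    return num / den if den else 0.0

print(flc(0.5, 0.0))   # positive position error -> positive corrective action
```

In the real loop this output would be scaled to a coil-current command; the manual tuning the abstract mentions amounts to adjusting the membership breakpoints and the input/output scaling gains.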
Design criteria for a self-actuated shutdown system to ensure limitation of core damage. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deane, N.A.; Atcheson, D.B.
1981-09-01
Safety-based functional requirements and design criteria for a self-actuated shutdown system (SASS) are derived in accordance with LOA-2 success criteria and reliability goals. The design basis transients have been defined and evaluated for the CDS Phase II design, which is a 2550 MWt mixed oxide heterogeneous core reactor. A partial set of reactor responses for selected transients is provided as a function of SASS characteristics such as reactivity worth, trip points, and insertion times.
NASA Astrophysics Data System (ADS)
Zbiciak, M.; Grabowik, C.; Janik, W.
2015-11-01
Nowadays the design and construction process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building custom software that automates particular engineering activities. In this paper, original software developed to automate engineering tasks at the stage of a product's geometrical shape design is presented. The software works exclusively in the Siemens NX CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with the .NET technology and the NX SNAP library. The software allows designing and modelling of spur and helicoidal involute gears; moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison with those drawn in standard CAD system tools. This stems from the fact that CAD systems usually draw an involute through 3 points, corresponding to points on the addendum circle, the reference diameter of the gear, and the base circle respectively. In the Generator module the involute is drawn through 11 points located on and between the base and addendum circles, so the 3D gear wheel models are highly accurate. Application of the Generator module also makes the modelling process very rapid, reducing the gear wheel modelling time to several seconds. During the conducted research, the differences between the standard 3-point and the 11-point involutes were analysed; the results and conclusions drawn from this analysis are presented in detail.
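Sampling an involute at several roll angles, as the Generator module is described to do, is straightforward: the involute of the base circle is parameterized by the roll angle, and the angle reaching the addendum circle bounds the sampled range. The base and addendum radii below are illustrative assumptions, not values from the paper.

```python
import math

def involute_points(r_base, r_addendum, n_points=11):
    """(x, y) points on an involute from the base to the addendum circle."""
    # A point at roll angle t lies at radius r_base*sqrt(1 + t^2), so the
    # roll angle reaching radius r is t = sqrt((r/r_base)^2 - 1).
    t_max = math.sqrt((r_addendum / r_base) ** 2 - 1.0)
    pts = []
    for i in range(n_points):
        t = t_max * i / (n_points - 1)
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

pts = involute_points(r_base=46.98, r_addendum=55.0)   # mm, illustrative
radii = [math.hypot(x, y) for x, y in pts]
print(f"first/last point radius: {radii[0]:.2f} / {radii[-1]:.2f} mm")
```

A spline through these 11 points tracks the true involute far more closely than one through only the 3 points (addendum, reference, base) that the abstract says standard CAD tools use.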
2013-01-01
Background Mobile technology offers the potential to deliver health-related interventions to individuals who would not otherwise present for in-person treatment. Text messaging (short message service, SMS), being the most ubiquitous form of mobile communication, is a promising method for reaching the most individuals. Objective The goal of the present study was to evaluate the feasibility and preliminary efficacy of a smoking cessation intervention program delivered through text messaging. Methods Adult participants (N=60, age range 18-52 years) took part in a single individual smoking cessation counseling session, and were then randomly assigned to receive either daily non-smoking related text messages (control condition) or the TXT-2-Quit (TXT) intervention. TXT consisted of automated smoking cessation messages tailored to the individual’s stage of smoking cessation, specialized messages provided on-demand based on user requests for additional support, and a peer-to-peer social support network. Generalized estimating equation analysis was used to assess the primary outcome (7-day point-prevalence abstinence) using a 2 (treatment groups)×3 (time points) repeated measures design across three time points: 8 weeks, 3 months, and 6 months. Results Smoking cessation results showed an overall significant group difference in 7-day point prevalence abstinence across all follow-up time points, with higher odds of abstinence for the TXT group than for the control (Mojo) group (OR=4.52, 95% CI=1.24, 16.53). However, individual comparisons at each time point did not show significant between-group differences, likely due to reduced statistical power. Intervention feasibility was greatly improved by switching from traditional face-to-face recruitment methods (4.7% yield) to an online/remote strategy (41.7% yield).
Conclusions Although this study was designed to develop and provide initial testing of the TXT-2-Quit system, these initial findings provide promising evidence that a text-based intervention can be successfully implemented with a diverse group of adult smokers. Trial Registration ClinicalTrials.gov: NCT01166464; http://clinicaltrials.gov/ct2/show/NCT01166464 (Archived by WebCite at http://www.webcitation.org/6IOE8XdE0). PMID:25098502
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time to event endpoint: it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time to event endpoints, but few R functions are available. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on to the package gsDesign which permits, in one run of the program, calculation of the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
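The "number of events" step that such functions automate is usually the Schoenfeld approximation for a two-arm log-rank comparison. A minimal sketch, independent of the plansurvct.func internals and assuming a two-sided test with 1:1 allocation:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80):
    """Required events: d = 4 * (z_a + z_b)^2 / ln(HR)^2 (1:1 allocation)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(4.0 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2)

print(schoenfeld_events(0.7))   # → 247 events to detect HR 0.7 at 80% power
```

The required number of patients then follows by dividing the event count by the expected proportion of patients with an event during follow-up, which is where accrual and censoring assumptions enter.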
Monitoring the Level of Students' GPAs over Time
ERIC Educational Resources Information Center
Bakir, Saad T.; McNeal, Bob
2010-01-01
A nonparametric (or distribution-free) statistical quality control chart is used to monitor the cumulative grade point averages (GPAs) of students over time. The chart is designed to detect any statistically significant positive or negative shifts in student GPAs from a desired target level. This nonparametric control chart is based on the…
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
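The modulo-PCM idea behind a codec like MPCM can be sketched briefly: the encoder keeps only the m least-significant bits of each sample, and the decoder restores the discarded high bits by picking the candidate congruent to the received value that lies closest to a prediction. The bit widths and the previous-sample predictor below are illustrative assumptions, not the MPCM hardware design.

```python
def mpcm_encode(samples, m):
    """Keep only the m least-significant bits of each sample."""
    return [s % (1 << m) for s in samples]

def mpcm_decode(codes, m, first_sample):
    """Rebuild samples, assuming neighbours differ by less than 2^(m-1)."""
    mod = 1 << m
    out = [first_sample]
    for c in codes[1:]:
        pred = out[-1]
        # candidate congruent to c (mod 2^m) nearest to the prediction
        base = pred - (pred - c) % mod
        out.append(min((base, base + mod), key=lambda v: abs(v - pred)))
    return out

pixels = [200, 203, 201, 198, 202, 205]    # slowly varying 8-bit samples
codes = mpcm_encode(pixels, m=4)           # 4 bits/sample instead of 8
print(mpcm_decode(codes, 4, pixels[0]))
```

Reconstruction is exact whenever successive samples stay within half the modulo range, which is why the scheme suits the smooth, highly oversampled frames of ultrahigh-speed cameras.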
Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments
2017-06-13
reference frames enable a system designer to describe the position of any sensor or platform at any point of time. This section introduces the… analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In… structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values (f). A systems designer must…
Design and analysis of MEMS MWCNT/epoxy strain sensor using COMSOL
NASA Astrophysics Data System (ADS)
Sapra, Gaurav; Sharma, Preetika
2017-07-01
The design and performance of a piezoresistive MEMS-based MWCNT/epoxy composite strain sensor have been investigated using the COMSOL Multiphysics toolbox. The proposed sensor design comprises an SU-8-based U-shaped cantilever beam with an MWCNT/epoxy composite film as the active sensing element. A microscale point load has been applied at the tip of the cantilever beam to observe its deflection in the proposed design. Analytical simulations have been performed to optimize various design parameters of the proposed sensor, which will be helpful at the time of fabrication.
Interrupted time series regression for the evaluation of public health interventions: a tutorial.
Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio
2017-02-01
Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.
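The main segmented regression model such tutorials build has four terms: a baseline level, a pre-interruption slope, a level change at the interruption, and a slope change afterwards. A minimal sketch on simulated data (not the tutorial's worked example), ignoring the autocorrelation and seasonality adjustments the abstract goes on to discuss:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t0 = 48, 24                          # 48 monthly points, interruption at 24
t = np.arange(n)
post = (t >= t0).astype(float)          # step term: level change
t_since = post * (t - t0)               # ramp term: slope change
y = 10 + 0.20 * t - 3.0 * post + 0.50 * t_since + rng.normal(0, 0.3, n)

# y = b0 + b1*t + b2*post + b3*t_since, fit by ordinary least squares
X = np.column_stack([np.ones(n), t, post, t_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta
print(f"pre-slope {b1:.2f}, level change {b2:.2f}, slope change {b3:.2f}")
```

Choosing a priori whether the impact model includes the step (b2), the slope change (b3), or both is exactly the "often omitted" specification step the tutorial emphasizes.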
Interrupted time series regression for the evaluation of public health interventions: a tutorial
Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio
2017-01-01
Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design. PMID:27283160
Solar Glare Hazard Analysis Tool v. 4.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford L.; Sims, Cianan
SGHAT predicts the occurrence and intensity of glare caused by a user-specified solar panel array when viewed from one or more observation points. An interactive mapping interface is used to determine the latitude, longitude and elevation of the array and observation points. The presence and intensity of glare is then calculated along a given time interval throughout the year, based on the position of the sun. The potential ocular hazard is also reported. The maximum energy production of the solar array is also estimated so that alternative designs can be compared to determine the design that yields the most energy production while mitigating glare.
2011 NASA Lunabotics Mining Competition for Universities: Results and Lessons Learned
NASA Technical Reports Server (NTRS)
Mueller, Robert P.; Murphy, Gloria A.
2011-01-01
Overview: Design, build & compete remote controlled robot (Lunabot). Excavate Black Point 1 (BP-1) Lunar Simulant. Deposit minimum of 10 kg of BP-1 within 15 minutes $5000, $2500, $1000 Scholarships for most BP-1 excavated. May 23-28, 2011. Kennedy Space Center, FL. International Teams Allowed for the First Time. What is a Lunabot? a) Robot Controlled Remotely or Autonomously. b) Visual and Auditory Isolation from Operator. c) Excavates Black Point 1 (BP-l) Simulant. d) Weight Limit - 80 kg. e)Dimension Limits -1.5m width x .75m length x 2m height. f) Designed, Built and Tested by University Student Teams.
Solar Glare Hazard Analysis Tool v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Sims, Cianan A.
SGHAT predicts the occurrence and intensity of glare caused by a user-specified solar panel array when viewed from one or more observation points. An interactive mapping interface is used to determine the latitude, longitude and elevation of the array and observation points. The presence and intensity of glare is then calculated along a given time interval throughout the year, based on the position of the sun. The potential ocular hazard is also reported. The maximum energy production of the solar array is also estimated so that alternative designs can be compared to determine the design that yields the most energy production while mitigating glare.
Knopman, Debra S.; Voss, Clifford I.
1989-01-01
Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. 
The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
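The enumeration step described above, scoring each candidate design on several objectives and keeping the noninferior set, can be sketched directly. The objective values below are random stand-ins for discrimination information, parameter-variance reduction, and (negated) cost, so that larger is better throughout; the 120-design count matches the abstract.

```python
import numpy as np

def noninferior(scores):
    """Indices of designs not dominated by any other design.

    Design j dominates design i if j is at least as good on every
    objective and strictly better on at least one.
    """
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(o >= s) and np.any(o > s) for o in scores)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(7)
scores = rng.random((120, 3))   # 120 candidate designs x 3 objectives
front = noninferior(scores)
print(f"{len(front)} of {len(scores)} designs are noninferior")
```

The noninferior set is what supports the trade-off analysis: moving along it exchanges, say, discriminatory power for sampling cost without any design dominating another.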
Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography
NASA Astrophysics Data System (ADS)
Risi, Matthew D.
Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (~1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented.
The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.
Langley's CSI evolutionary model: Phase O
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Elliott, Kenny B.; Horta, Lucas G.; Bailey, Jim P.; Bruner, Anne M.; Sulla, Jeffrey L.; Won, John; Ugoletti, Roberto M.
1991-01-01
A testbed for the development of Controls Structures Interaction (CSI) technology to improve space science platform pointing is described. The evolutionary nature of the testbed will permit the study of global line-of-sight pointing in phases 0 and 1, whereas multipayload pointing systems will be studied beginning with phase 2. The design, capabilities, and typical dynamic behavior of the phase 0 version of the CSI evolutionary model (CEM) are documented for investigators both internal and external to NASA. The model description includes line-of-sight pointing measurement, testbed structure, actuators, sensors, and real-time computers, as well as finite element and state space models of major components.
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Pototzky, Anthony S.
1989-01-01
A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and Phased Design Load Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory is given. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised of whether or not it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.
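The tangency relationship described above can be checked numerically. The sketch below is purely illustrative (the covariance matrix, the correlation of 0.6, and the unit scaling are invented for the example, not taken from the analysis): two correlated load responses are normalized so that the equiprobable design ellipse is the level-1.0 contour of the quadratic form y^T C^{-1} y. Points on (or tangent to) the ellipse then evaluate to exactly 1.0, and a parametric load plot extending outside the ellipse would evaluate above it.

```python
import numpy as np

# Illustrative check (not the paper's actual loads): evaluate a parametric
# two-load trajectory against the equiprobable design ellipse
# y^T C^-1 y <= c^2, normalized so the ellipse is the level-1.0 contour.
def ellipse_level(y1, y2, cov, c=1.0):
    """Return the normalized quadratic-form level of each (y1, y2) point."""
    inv = np.linalg.inv(cov)
    pts = np.stack([y1, y2], axis=1)
    q = np.einsum("ni,ij,nj->n", pts, inv, pts)
    return q / c**2

# Hypothetical correlated load responses: unit variances, correlation 0.6
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
t = np.linspace(0.0, 2.0 * np.pi, 400)

# A trajectory built to lie exactly on the ellipse boundary: y = L u with
# |u| = 1 gives y^T C^-1 y = 1 when C = L L^T (Cholesky factorization).
L = np.linalg.cholesky(cov)
boundary = (L @ np.stack([np.cos(t), np.sin(t)])).T
levels = ellipse_level(boundary[:, 0], boundary[:, 1], cov)
print(levels.max())  # all boundary points sit at level 1.0
```

A tangent parametric load plot touches this level-1.0 contour at the design point; any excursion above 1.0 would indicate the non-conservative case the paper raises.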
Lee, Da-Sheng
2010-01-01
Chip-based DNA quantification systems are widespread and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.
A methodology for double patterning compliant split and design
NASA Astrophysics Data System (ADS)
Wiaux, Vincent; Verhaegen, Staf; Iwamoto, Fumio; Maenhoudt, Mireille; Matsuda, Takashi; Postnikov, Sergei; Vandenberghe, Geert
2008-11-01
Double Patterning (DP) allows the use of water immersion lithography to be extended further at its maximum numerical aperture, NA = 1.35. Splitting design layers so that they recombine through DP enables an effective resolution enhancement. Single polygons may need to be split up (cut) depending on the pattern density and its 2D content. The split polygons recombine at so-called 'stitching points', which may affect yield owing to their sensitivity to process variations. We describe a methodology to ensure robust double patterning by identifying proper split and design guidelines. Using simulations and experimental data, we discuss in particular metal1 first interconnect layers of random LOGIC and DRAM applications at 45 nm half-pitch (hp) and 32 nm hp, where DP may become the only timely patterning solution.
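The layer-split step has a convenient graph formulation, sketched below under simplifying assumptions (feature pairs closer than the single-exposure pitch are modeled as conflict edges; real decomposition also handles polygon cutting and stitch placement, which this sketch does not): assigning features to the two exposures is a graph 2-coloring problem, and an odd conflict cycle signals that some polygon must be cut and stitched.

```python
from collections import deque

def split_two_masks(n, conflicts):
    """Assign n features to two masks so that conflicting (too-close)
    pairs land on different masks, via BFS 2-coloring. Returns
    (colors, odd_cycle_found): an odd conflict cycle means the layout
    is not 2-colorable as-is and some polygon must be cut (stitched)."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [-1] * n
    odd_cycle = False
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    odd_cycle = True  # not 2-colorable: stitching needed
    return color, odd_cycle

# Four features in a ring: 2-colorable, no stitch needed
colors, stitch = split_two_masks(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(colors, stitch)  # [0, 1, 0, 1] False
```

A triangle of mutually conflicting features, by contrast, returns `odd_cycle_found = True`: that is exactly the 2D-content case where a polygon cut introduces a stitching point.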
A smart market for nutrient credit trading to incentivize wetland construction
NASA Astrophysics Data System (ADS)
Raffensperger, John F.; Prabodanie, R. A. Ranga; Kostel, Jill A.
2017-03-01
Nutrient trading and constructed wetlands are widely discussed solutions for reducing nutrient pollution. Nutrient markets usually include agricultural nonpoint sources and municipal and industrial point sources, but these markets rarely include investors who construct wetlands to sell nutrient reduction credits. We propose a new market design for trading nutrient credits, with both point-source and nonpoint-source traders, that explicitly incorporates the option for landowners to build nutrient-removal wetlands. The proposed trading program is designed as a smart market with centralized clearing, done with an optimization. The market design addresses the varying impacts of runoff over space and time, and the lumpiness of wetland investments. We simulated the market for the Big Bureau Creek watershed in north-central Illinois. We found that the proposed smart market would incentivize wetland construction by assuring reasonable payments for the ecosystem services provided. The proposed market mechanism selects wetland locations strategically, taking into account both cost and nutrient removal efficiency. The centralized market produces locational prices that would incentivize farmers to reduce nutrients voluntarily. As we illustrate, wetland builders' participation in nutrient trading would enable point sources and environmental organizations to buy low-cost nutrient credits.
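The centralized clearing step can be illustrated with a deliberately simplified, single-node sketch (the offer names and numbers are invented, and the actual smart market is an optimization over space and time that also handles lumpy wetland investments and delivery ratios, none of which is modeled here): nutrient-reduction offers are accepted in merit order until the required reduction is met, and the marginal offer sets a uniform clearing price.

```python
def clear_market(offers, required_reduction):
    """Greedy merit-order clearing of nutrient-reduction offers.
    offers: list of (name, kg_removed, cost_per_kg). Returns the list of
    accepted (name, kg_taken, cost_per_kg) tuples and the uniform
    clearing price set by the marginal accepted offer."""
    accepted, remaining, price = [], required_reduction, 0.0
    for name, kg, cost in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(kg, remaining)
        accepted.append((name, take, cost))
        remaining -= take
        price = cost  # uniform price set by the marginal unit
    return accepted, price

# Hypothetical offers: a wetland builder, a farm BMP, and a point source
offers = [("wetland_A", 500, 2.0), ("farm_B", 300, 5.0), ("ps_C", 400, 8.0)]
accepted, price = clear_market(offers, 700)
print(accepted, price)  # wetland_A 500 kg + farm_B 200 kg, price 5.0
```

Note how the low-cost wetland offer clears fully while the expensive point-source abatement is displaced, which is the basic economic intuition behind letting wetland builders sell credits.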
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
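The rule for the second sample, i.e. the time on the declining limb where the plasma concentration matches that at the early sample, can be made concrete with a toy one-compartment oral-absorption model (the rate constants and early sampling time below are hypothetical, not from the study):

```python
import math

# Illustrative one-compartment model (all parameters hypothetical)
ka, ke = 2.0, 0.2  # absorption and elimination rate constants (1/h)

def conc(t):
    # Bateman equation for plasma concentration, unit dose/volume
    return (ka / (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def matching_decline_time(t_early, t_hi=48.0, tol=1e-8):
    """Find the time on the declining limb where the plasma concentration
    equals the concentration at the early sample, by bisection."""
    target = conc(t_early)
    t_peak = math.log(ka / ke) / (ka - ke)  # tmax of the Bateman curve
    lo, hi = t_peak, t_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if conc(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t1 = 0.25  # early sample (h), assumed first time BAL fluid exceeds LOQ
t2 = matching_decline_time(t1)
print(round(t2, 2))  # second sample time, well past tmax
```

Pairing the two samples at equal plasma concentrations is what lets the BAL-to-plasma ratio separate the rate of pulmonary distribution from its extent.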
Application of 3D Laser Scanning Technology in Complex Rock Foundation Design
NASA Astrophysics Data System (ADS)
Junjie, Ma; Dan, Lu; Zhilong, Liu
2017-12-01
Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology to the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is developed and several key engineering problems are solved. The first is 3D laser scanning of the complex landform: 3D laser scanning is used to obtain a complete 3D point cloud data model of the landform, and the detailed, accurate surveying and mapping results reduce the measuring time and the number of supplementary surveys. The second is 3D collaborative modeling of the complex landform: a 3D model of the landform is established from the 3D point cloud data model, the superstructure foundation model is introduced for 3D collaborative design, the optimal design plan is selected, and construction progress is accelerated. The last is finite-element analysis of the complex landform foundation: the 3D model of the landform is imported into ANSYS to build a finite element model for calculating the anti-slide stability of the rock, providing a basis for the foundation design and construction.
Portable Amplifier Design for a Novel EEG Monitor in Point-of-Care Applications.
Luan, Bo; Sun, Mingui; Jia, Wenyan
2012-01-01
Electroencephalography (EEG) is a common diagnostic tool for neurological diseases and dysfunctions, such as epilepsy and insomnia. However, current EEG technology cannot be utilized quickly and conveniently at the point of care due to the complex skin preparation procedures required and the inconvenient EEG data acquisition systems. This work presents a portable amplifier design that integrates a set of skin screw electrodes and a wireless data link. The battery-operated amplifier contains an instrumentation amplifier, two noninverting amplifiers, two high-pass filters, and a low-pass filter. It is able to magnify the EEG signals over 10,000 times and has a high input impedance, low noise, small size, and low weight. Our electrode and amplifier are ideal for point-of-care applications, especially during transportation of patients suffering from traumatic brain injury or stroke.
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near-real-time performance when applied to critical clinical tasks like image-assisted surgery. In this paper, we report a new multicore-platform-based parallel algorithm for fast point matching in the context of landmark-based medical image registration. We introduce a non-regular data partition algorithm which uses K-means clustering to group the landmarks according to the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed-up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design; therefore the parallel algorithm can be extended to other computing platforms, as well as to other point matching related applications. PMID:24308014
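The partitioning idea can be sketched as follows (illustrative only: the Cell/B.E.-specific scheduling and memory transfers are not modeled, and the deterministic seeding is a simplification for reproducibility; the synthetic landmark clusters are invented): Lloyd's k-means groups the landmarks into one spatially coherent, similarly sized chunk per available core.

```python
import numpy as np

def kmeans_partition(points, k, iters=10):
    """Group landmark points into k chunks (one per processing core)
    with a minimal Lloyd's k-means; strided deterministic seeding keeps
    this sketch reproducible."""
    centers = points[:: max(1, len(points) // k)][:k].astype(float)
    for _ in range(iters):
        # distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Four well-separated synthetic landmark clusters of 50 points each
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(loc, 0.1, (50, 2))
                 for loc in ((0, 0), (5, 5), (0, 5), (5, 0))])
labels = kmeans_partition(pts, 4)
print(np.bincount(labels, minlength=4))  # four balanced chunks of 50
```

Each chunk would then be handed to a separate core, so that a core's landmarks are spatial neighbors and its working set stays small, which is the memory-locality argument the abstract makes.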
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped element models do a poor job of predicting THz frequency performance. This paper describes a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle field Monte Carlo simulation, with ohmic and Schottky barrier boundary conditions included, that has been combined with a fixed point solution for the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.
Banerjee, Jayeeta; Bhattacharyya, Moushum
2011-12-01
There is a rapid shifting of media from printed paper to computer screens. This transition is modifying the process of how we read and understand text. The efficiency of reading depends on how ergonomically the visual information is presented. Font type and size characteristics have been shown to affect reading. A detailed investigation of the effect of font type and size on reading on computer screens was carried out using subjective, objective and physiological evaluation methods on young adults. A group of young participants volunteered for this study. Two types of fonts were used: serif fonts (Times New Roman, Georgia, Courier New) and sans serif fonts (Verdana, Arial, Tahoma). All fonts were presented in 10, 12 and 14 point sizes. This study used a 6 × 3 (font type × size) design matrix. Participants read 18 passages of approximately the same length and reading level on a computer monitor. Reading time, ranking and overall mental workload were measured. Eye movements were recorded by a binocular eye movement recorder. Reading time was minimum for Courier New 14 point. The participants' ranking was highest and mental workload was least for Verdana 14 point. The pupil diameter, fixation duration and gaze duration were least for Courier New 14 point. The present study recommends using 14 point sized fonts for reading on computer screens. Courier New is recommended for fast reading, while for on-screen presentation Verdana is recommended. The outcome of this study will serve as a guideline for PC users, software developers, web page designers and the computer industry as a whole.
NASA Astrophysics Data System (ADS)
Wu, Bin; Yin, Hongxi; Qin, Jie; Liu, Chang; Liu, Anliang; Shao, Qi; Xu, Xiaoguang
2016-09-01
Aiming at the increasing demand for diversified services and flexible bandwidth allocation in future access networks, a flexible passive optical network (PON) scheme combining time and wavelength division multiplexing (TWDM) with a point-to-point wavelength division multiplexing (PtP WDM) overlay is proposed in this paper for next-generation optical access networks. A novel software-defined optical distribution network (ODN) structure is designed based on wavelength selective switches (WSS), which can implement dynamic wavelength and bandwidth allocation and suits bursty traffic. The experimental results reveal that the TWDM-PON can provide 40 Gb/s downstream and 10 Gb/s upstream data transmission, while the PtP WDM-PON can support 10 GHz point-to-point dedicated bandwidth as the overlay complement system. The wavelengths of the TWDM-PON and PtP WDM-PON are allocated dynamically based on the WSS, which verifies the feasibility of the proposed structure.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
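The core of such a filter can be sketched in a few lines (a generic illustration of the windowed mean/standard-deviation test, not NASA's implementation; the window length and threshold are arbitrary): when the standard deviation of a sliding window of samples falls below a limit, the engine is assumed steady and the window mean is archived as an operating point.

```python
from collections import deque
from statistics import mean, stdev

class SteadyStateFilter:
    """Sliding-window steady-state detector: flags an operating point
    when the window standard deviation falls below a threshold, then
    archives the window mean for condition monitoring."""
    def __init__(self, window=10, std_limit=0.5):
        self.buf = deque(maxlen=window)
        self.std_limit = std_limit
        self.archive = []

    def update(self, sample):
        self.buf.append(sample)
        if len(self.buf) == self.buf.maxlen and stdev(self.buf) < self.std_limit:
            self.archive.append(mean(self.buf))
            self.buf.clear()  # avoid re-archiving the same operating point

# A transient climb followed by a steady condition near 80
f = SteadyStateFilter(window=5, std_limit=0.2)
for x in [10, 30, 50, 52, 80, 80.1, 80.0, 79.9, 80.1, 80.0]:
    f.update(x)
print(f.archive)  # one steady-state point near 80
```

A real implementation would run this per-parameter on the streaming data and apply the paper's additional domain-specific logic constraints to reject outliers; the windowed test above is only the generic core.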
NASA Astrophysics Data System (ADS)
Andrus, Matthew
Stroke is a leading cause of death and disability in the United States; however, there remains no rapid diagnostic test for differentiating between ischemic and hemorrhagic stroke within the three-hour treatment window. Here we describe the design of a multiscale microfluidic module with an embedded time-of-flight nanosensor for the clinical diagnosis of stroke. The nanosensor utilizes two synthetic pores in series, relying on resistive pulse sensing (RPS) to measure the passage of molecules through the time-of-flight tube. Once the nanosensor design was completed, a multiscale module to process patient samples and house the sensors was designed in a similar iterative process. This design utilizes pillar arrays, called "pixels", to immobilize oligonucleotides from patient samples so that ligase detection reactions (LDR) can be carried out. COMSOL simulations were performed to understand the operation and behavior of both the nanosensor and the modular chip once the designs were completed.
Adaptive Automation and Cue Invocation: The Effect of Cue Timing on Operator Error
2013-05-01
… Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43, 931-951. … Prospective memory errors involve memory for intended actions that are planned to be performed at some designated point in the future [20]. … (RESCHU) [21] was used in this study. A Navy pilot who is familiar with supervisory control tasks designed the RESCHU task …
Intelligent Agents for Design and Synthesis Environments: My Summary
NASA Technical Reports Server (NTRS)
Norvig, Peter
1999-01-01
This presentation gives a summary of intelligent agents for design synthesis environments. We'll start with the conclusions, and work backwards to justify them. First, an important assumption is that agents (whatever they are) are good for software engineering. This is especially true for software that operates in an uncertain, changing environment. The "real world" of physical artifacts is like that: uncertain in what we can measure, changing in that things are always breaking down, and we must interact with non-software entities. The second point is that software engineering techniques can contribute to good design. There may have been a time when we wanted to build simple artifacts containing little or no software. But modern aircraft and spacecraft are complex, and rely on a great deal of software. So better software engineering leads to better designed artifacts, especially when we are designing a series of related artifacts and can amortize the costs of software development. The third point is that agents are especially useful for design tasks, above and beyond their general usefulness for software engineering, and the usefulness of software engineering to design.
Francis A. Roesch
2012-01-01
In the past, the goal of forest inventory was to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design included selection probabilities based on land area observed at a discrete point in time. That is, time was not...
A systematic FPGA acceleration design for applications based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao
2018-04-01
Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and ignore the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a brand-new systematic FPGA acceleration design to solve this problem. This design takes the data-path optimization between the inner accelerator and the outer system into consideration and optimizes the data path using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipelining technique to optimize the inner accelerator. Together these give the final system very good performance, reaching about 10 times the performance of the original system.
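The fixed-point side of such a design can be illustrated with a minimal software model (the Q8.8 format and the saturation behavior are arbitrary choices for the sketch, not the paper's actual bit widths): a floating-point weight or activation is scaled, rounded, and saturated to a signed 16-bit word, which is what lets the FPGA datapath replace floating-point units with integer logic.

```python
def to_fixed(x, frac_bits=8, word_bits=16):
    """Quantize a float to signed fixed-point (here Q8.8 by default),
    saturating at the representable range, as commonly done for CNN
    weights and activations on FPGAs."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = int(round(x * scale))
    return max(lo, min(hi, q))  # saturate instead of wrapping

def from_fixed(q, frac_bits=8):
    """Recover the real value represented by a fixed-point word."""
    return q / (1 << frac_bits)

w = 0.71828
q = to_fixed(w)           # Q8.8 representation of the weight
print(q, from_fixed(q))   # 184 0.71875
```

The quantization error here (about 5e-4) is the precision/area trade-off that fixed-point accelerator designs tune by choosing the fractional bit width.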
McLaughlin, Paula M; Curtis, Ashley F; Branscombe-Caird, Laura M; Comrie, Janna K; Murtha, Susan J E
2018-02-01
To investigate whether a commercially available brain training program is feasible to use with a middle-aged population and has a potential impact on cognition and emotional well-being (proof of concept). Fourteen participants (ages 46-55) completed two 6-week training conditions using a crossover (counterbalanced) design: (1) experimental brain training condition and (2) active control "find answers to trivia questions online" condition. A comprehensive neurocognitive battery and a self-report measure of depression and anxiety were administered at baseline (first time point, before training) and after completing each training condition (second time point at 6 weeks, and third time point at 12 weeks). Cognitive composite scores were calculated for participants at each time point. Study completion and protocol adherence demonstrated good feasibility of this brain training protocol in healthy middle-aged adults. Exploratory analyses suggested that brain training was associated with neurocognitive improvements related to executive attention, as well as improvements in mood. Overall, our findings suggest that brain training programs are feasible in middle-aged cohorts. We propose that brain training games may be linked to improvements in executive attention and affect by promoting cognitive self-efficacy in middle-aged adults.
Law, Mary; Anaby, Dana; Imms, Christine; Teplicky, Rachel; Turner, Laura
2015-04-01
Youth with physical disabilities experience restrictions to participation in community-based leisure activities; however, there is little evidence about how to improve their involvement. This study examined whether an intervention to remove environmental barriers and develop strategies using a coaching approach improved youth participation in leisure activities. An Interrupted Time Series design was employed, where replication of the intervention effect was examined across individualised participation goals and across participants. Six adolescents with a physical disability participated in a 12-week intervention. An occupational therapist worked with each youth and his/her family to set three leisure goals based on problems identified using the Canadian Occupational Performance Measure (COPM). A coaching approach was used to collaboratively identify and implement strategies to remove environmental barriers. Interventions for each goal were introduced at different time points. Outcomes were evaluated using the COPM. Improvements in COPM performance scores were clinically significant for 83% of the identified activities; an average change of 4.5 points in the performance scale (SD = 1.95) was observed. Statistical analysis using the celeration line demonstrated that the proportion of data points falling above the line increased in the intervention phase for 94% of the activities, indicating a significant treatment effect. This study is the first to examine an intervention aimed at increasing leisure participation by changing only the environment. The results indicate that environment-focussed interventions are feasible and effective in promoting youth participation. Such findings can inform the design of a larger study and guide occupational therapy practice. © 2015 Occupational Therapy Australia.
NASA Astrophysics Data System (ADS)
Garg, Amit Kumar; Madavi, Amresh Ashok; Janyani, Vijay
2017-02-01
A flexible hybrid wavelength division multiplexing-time division multiplexing passive optical network architecture is proposed that allows dual-rate signals to be sent at 1 and 10 Gbps to each optical networking unit depending upon the traffic load. The proposed design allows dynamic wavelength allocation with pay-as-you-grow deployment capability. This architecture is capable of providing up to 40 Gbps of equal data rates to all optical distribution networks (ODNs) and up to 70 Gbps of an asymmetrical data rate to a specific ODN. The proposed design handles broadcasting with simultaneous point-to-point transmission, which further reduces energy consumption. In this architecture, each module sends a wavelength to each ODN, making the architecture fully flexible; this flexibility allows network providers to use only the required OLT components and switch off the others. The design is also resilient to any module or TRx failure and provides services without any service disruption. Dynamic wavelength allocation and pay-as-you-grow deployment support network extensibility and bandwidth scalability to handle future-generation access networks.
Characteristic Boundary Conditions for ARO-1
1983-05-01
As shown in Fig. 3, the point designated II is the interior point that was used to define the barred coordinate system, evaluated at time t=… L. Jacocks, Calspan Field Services, Inc. May 1983. Final Report for Period October 1981 - September 1982. Approved for public release; distribution unlimited. Arnold Engineering Development Center, Arnold Air Force Station, Tennessee.
Survey State of the Art: Electrical Load Management Techniques and Equipment.
1986-10-31
… automobiles and even appliances. Applications in the area of demand and energy management have been multifaceted, given the needs involved and rapid paybacks. … a copy of the programming to be reloaded into the controller at any time and, by designing this module with erasable and reprogrammable memory, … performs DDC (direct digital control) of output points; programming is stored in reprogrammable, permanent memory. A RIM may accommodate up …
NASA Astrophysics Data System (ADS)
Elliott, Emily A.; Monbureau, Elaine; Walters, Glenn W.; Elliott, Mark A.; McKee, Brent A.; Rodriguez, Antonio B.
2017-12-01
Identifying the source and abundance of sediment transported within tidal creeks is essential for studying the connectivity between coastal watersheds and estuaries. The fine-grained suspended sediment load (SSL) makes up a substantial portion of the total sediment load carried within an estuarine system, and efficient sampling of the SSL is critical to our understanding of nutrient and contaminant transport, anthropogenic influence, and the effects of climate. Unfortunately, traditional methods of sampling the SSL, including instantaneous measurements and automatic samplers, can be labor intensive, expensive, and often yield insufficient mass for comprehensive geochemical analysis. In estuaries this issue is even more pronounced due to bi-directional tidal flow. This study tests the efficacy of a time-integrated mass sediment sampler (TIMS) design, originally developed for uni-directional flow within the fluvial environment, modified in this work for implementation in the tidal environment under bi-directional flow conditions. Our new TIMS design utilizes an 'L'-shaped outflow tube to prevent backflow, and when deployed in mirrored pairs, each sampler collects sediment uniquely in one direction of tidal flow. Laboratory flume experiments using dye and particle image velocimetry (PIV) were used to characterize the flow within the sampler, specifically to quantify the settling velocities and identify stagnation points. Further laboratory tests of sediment indicate that bidirectional TIMS capture up to 96% of incoming SSL across a range of flow velocities (0.3-0.6 m s-1). The modified TIMS design was tested in the field at two distinct sampling locations within the tidal zone. Single-time-point suspended sediment samples were collected at high and low tide and compared to time-integrated suspended sediment samples collected by the bi-directional TIMS over the same four-day period.
Particle-size composition from the bi-directional TIMS were representative of the array of single time point samples, but yielded greater mass, representative of flow and sediment-concentration conditions at the site throughout the deployment period. This work proves the efficacy of the modified bi-directional TIMS design, offering a novel tool for collection of suspended sediment in the tidally-dominated portion of the watershed.
Control-enhanced multiparameter quantum estimation
NASA Astrophysics Data System (ADS)
Liu, Jing; Yuan, Haidong
2017-10-01
Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit but also have higher stability against inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a very precise time point due to the response time of the measurement apparatus.
ERIC Educational Resources Information Center
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…
Traffic data collection and anonymous vehicle detection using wireless sensor networks.
DOT National Transportation Integrated Search
2012-05-01
New traffic sensing devices based on wireless sensing technologies were designed and tested. Such devices encompass a cost-effective, battery-free, and energy self-sustained architecture for real-time traffic measurement over distributed points in a ...
Hilton, G; Unsworth, C A; Murphy, G C; Browne, M; Olver, J
2017-08-01
Longitudinal cohort design. First, to explore the longitudinal outcomes for people who received early intervention vocational rehabilitation (EIVR); second, to examine the nature and extent of relationships between contextual factors and employment outcomes over time. Both inpatient and community-based clients of a Spinal Community Integration Service (SCIS). People of workforce age undergoing inpatient rehabilitation for traumatic spinal cord injury were invited to participate in EIVR as part of SCIS. Data were collected at the following three time points: discharge and at 1 year and 2+ years post discharge. Measures included the spinal cord independence measure, hospital anxiety and depression scale, impact on participation and autonomy scale, numerical pain-rating scale and personal wellbeing index. A range of chi square, correlation and regression tests were undertaken to look for relationships between employment outcomes and demographic, emotional and physical characteristics. Ninety-seven participants were recruited and 60 were available at the final time point where 33% (95% confidence interval (CI): 24-42%) had achieved an employment outcome. Greater social participation was strongly correlated with wellbeing (ρ=0.692), and reduced anxiety (ρ=-0.522), depression (ρ=-0.643) and pain (ρ=-0.427) at the final time point. In a generalised linear mixed effect model, education status, relationship status and subjective wellbeing increased significantly the odds of being employed at the final time point. Tertiary education prior to injury was associated with eight times increased odds of being in employment at the final time point; being in a relationship at the time of injury was associated with increased odds of being in employment of more than 3.5; subjective wellbeing, while being the least powerful predictor was still associated with increased odds (1.8 times) of being employed at the final time point. 
EIVR shows promise in delivering similar return-to-work rates as those traditionally reported, but sooner. The dynamics around relationships, subjective wellbeing, social participation and employment outcomes require further exploration.
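The correlation coefficients reported above (ρ) are Spearman rank correlations. As an illustration of how such a coefficient is computed — the scores below are invented, not the study's data — a minimal pure-Python sketch:

```python
def rank(xs):
    # average ranks (1-based), handling ties by assigning the tie group its mean rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Pearson correlation applied to the ranks
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical participation and wellbeing scores (perfectly concordant ranks)
part = [3, 8, 5, 9, 2, 7]
well = [40, 75, 55, 80, 35, 70]
print(round(spearman_rho(part, well), 3))  # → 1.0
```

Because the two invented score lists rank the participants identically, ρ comes out exactly 1; the study's reported values (e.g. ρ=0.692) reflect partial rank agreement.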
Testing of the BepiColombo Antenna Pointing Mechanism
NASA Astrophysics Data System (ADS)
Campo, Pablo; Barrio, Aingeru; Martin, Fernando
2015-09-01
BepiColombo is an ESA mission to Mercury. Its planetary orbiter (MPO) has two antenna pointing mechanisms: the high gain antenna (HGA) pointing mechanism steers and points a large reflector, which is integrated at system level by TAS-I Rome, and the medium gain antenna (MGA) APM points a 1.5 m boom with a horn antenna. Both radiating elements are exposed to sun fluxes as high as 10 solar constants without protection. A previous paper [1] described the design and development process used to solve the challenges of performing in this harsh environment. The current paper focuses on the testing of the qualification units. Verifying the performance of an antenna pointing mechanism in its specific environmental conditions required special set-ups and techniques. The process provided valuable feedback on the design and the testing methods, which has been incorporated into the PFM design and tests. Some of the technologies and components were developed on dedicated items prior to the EQM, but once integrated, their test behaviour showed relevant differences. Some of the major concerns for the APM testing were to: create, during thermal vacuum testing, the qualification temperature map with gradients along the APM, from 200 °C to 70 °C; test the radio-frequency and pointing performances in those conditions, also adding high RF power to check the power handling and self-heating of the rotary joint; test in life up to 12,000 equivalent APM revolutions, that is, 14.3 million motor revolutions, in different thermal conditions; measure the low thermal distortion of the mechanical chain (55 arcsec pointing error) while insulated from the external environment and interfaces; perform deployment of large items while guaranteeing low humidity (below 5%) to protect the dry lubrication; and verify stability with a representative inertia of a large boom or reflector (20 kg·m²).
Design and Laboratory Testing of a Prototype Linear Temperature Sensor
1982-07-01
… computer, critical quantities such as the line sensor's voltage, vertical position and, occasionally, a point sensor were also monitored in real time on a … 5.1 Linearity - Comparison with Theory … 5.2 Response Time … from some initial time t0 is more relevant to the measurement of internal waves (since the second term in the above equation is usually small
ERIC Educational Resources Information Center
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim
2014-01-01
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Signal processing of anthropometric data
NASA Astrophysics Data System (ADS)
Zimmermann, W. J.
1983-09-01
The Anthropometric Measurements Laboratory has accumulated a large body of data from a number of previous experiments. The data are very noisy and therefore require the application of signal processing schemes. Moreover, the data were not regarded as time series measurements but as positional information; hence, they are stored as coordinate points defined by the motion of the human body. The accumulated data define two groups or classes. Some of the data were collected from an experiment designed to measure the flexibility of the limbs, referred to as radial movement. The remaining data were collected from experiments designed to determine the surface of the reach envelope. An interactive signal processing package was designed and implemented. Since the data do not include time, this package does not include a time series element. Presently, the results are restricted to processing data obtained from the experiments designed to measure flexibility.
Hybrid Propulsion Technology Program, phase 1. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1989-01-01
The study program was contracted to evaluate concepts of hybrid propulsion, select the optimum, and prepare a conceptual design package. Further, this study required preparation of a technology definition package to identify hybrid propulsion enabling technologies, and planning to acquire that technology in Phase 2 and demonstrate it in Phase 3. Researchers evaluated two design philosophies for Hybrid Rocket Booster (HRB) selection. The first is an ASRM-modified hybrid wherein as many components/designs as possible were used from the present Advanced Solid Rocket Motor (ASRM) design. The second was an entirely new, optimized hybrid booster using ASRM criteria as a point of departure, i.e., diameter, thrust-time curve, launch facilities, and external tank attach points. Researchers selected the new design based on the logic of optimizing a hybrid booster to provide NASA with a next-generation vehicle in lieu of an interim advancement over the ASRM. The enabling technologies for hybrid propulsion are applicable to either, and the vehicle design may be selected at a downstream point (Phase 3) at NASA's discretion. The completion of these studies resulted in ranking the various booster concepts from the RSRM to a turbopump-fed (TF) hybrid. The scoring resulting from the Figure of Merit (FOM) scoring system clearly shows a natural growth path where the turbopump-fed solid-liquid staged combustion hybrid provides maximized payload and the highest safety, reliability, and lowest life cycle cost.
Experimental study of influence characteristics of flue gas fly ash on acid dew point
NASA Astrophysics Data System (ADS)
Song, Jinhui; Li, Jiahu; Wang, Shuai; Yuan, Hui; Ren, Zhongqiang
2017-12-01
The long-term operating experience of a large number of utility boilers shows that the measured acid dew point is generally lower than the estimated value. This is because the estimation formulas for the acid dew point do not consider the influence of the CaO and MgO in flue gas fly ash. Building on previous studies, an experimental device for acid dew point measurement was designed and constructed, and the acid dew point was measured under different flue gas conditions. The results show that the CaO and MgO in the fly ash have an obvious influence on the acid dew point: the fly ash content is negatively correlated with the acid dew point temperature. At the same time, the acid dew point varies with the H2SO4 concentration in the flue gas and is positively correlated with it.
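The abstract contrasts measured acid dew points with estimated values but does not reproduce the estimation formula. One widely used correlation for the sulfuric-acid dew point is the Verhoff-Banchero equation, sketched below; the formula choice and all flue gas figures are illustrative assumptions, not the paper's data:

```python
import math

def acid_dew_point_c(p_h2o_mmhg, p_so3_mmhg):
    """Estimated sulfuric-acid dew point via the Verhoff-Banchero correlation.
    Partial pressures in mmHg; returns temperature in deg C."""
    lw = math.log(p_h2o_mmhg)
    ls = math.log(p_so3_mmhg)
    inv_t = (2.276 - 0.0294 * lw - 0.0858 * ls + 0.0062 * lw * ls) / 1000.0  # 1/K
    return 1.0 / inv_t - 273.15

# ~10% vol water vapour and ~10 ppm SO3 at atmospheric pressure (illustrative)
print(round(acid_dew_point_c(76.0, 7.6e-3), 1))
```

Correlations of this family depend only on the H2SO4/SO3 and H2O partial pressures, which is consistent with the paper's point: they carry no term for CaO or MgO in the fly ash, so measured dew points can deviate from the estimate.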
[Myofascial pain syndrome--frequent occurrence and often misdiagnosed].
Pongratz, D E; Späth, M
1998-09-30
Myofascial pain syndrome (MPS) is a very common localized--sometimes also polytopic--painful musculoskeletal condition associated with trigger points, for which, however, diagnostic criteria established in well-designed studies are still lacking. These two facts form the basis for differentiating between MPS and the fibromyalgia syndrome. The difference between trigger points (MPS) and tender points (fibromyalgia) is of central importance--not merely in a linguistic sense. A knowledge of the signs and symptoms typically associated with a trigger point often obviates the need for time-consuming and expensive technical diagnostic measures. The assumption that many cases of unspecific complaints affecting the musculoskeletal system may be ascribed to MPS makes clear the scope for the saving of costs.
Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael
2016-01-01
It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design however, making changes to the design becomes far more difficult to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design its constraints and requirements must be known, however as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, as the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combatting these challenges [5]. 
In order to bring more detailed information into Conceptual and Pre-Conceptual design, the effects originating from changes to the vehicle must be calculated. To do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling: a parametric, polynomial equation representing a tool. A surrogate model must be informed by data from the tool with enough points to represent the solution space for the chosen number of variables with an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space that maximize the information gained while minimizing the number of data points required. For a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work the authors discuss issues with the method and solutions.
Gehrke, Sergio Alexandre; Marin, Giovanni Wiel
2015-05-01
The objective of this study was to investigate the effect of implant design on stability and resistance to reverse torque in the tibia of rabbits. Three test groups were prepared using the different characteristics of each implant model: square threads with depth progressively increasing toward the apex, a cervical portion without threads and a quite pronounced and aggressive self-tapping system (Group 1); triangular threads with flat tips, thread depth increasing from the cervical portion to the apex and a small self-tapping portion with a short thread pitch (Group 2); long thread pitch, progressive thread depth and an apical area with a small self-tapping portion (Group 3). For the two last groups, a final single-use drill was provided for each implant. Nine rabbits received 54 conical implants with the same surface treatment. The resonance frequency was analysed four times (0, 6, 8 and 12 weeks), and removal torque values were measured at three time intervals after the implantations (6, 8 and 12 weeks). In comparing the implant stability quotient at the four time points, highly significant differences were found (p = 1.29 × 10^-10). The reverse torque at the three time points was also significantly different among the groups (p = 0.00015). The implants of Group 2, with a seemingly less aggressive design, more quickly reached high values of stability and removal torque. Under the limitations of this study, however, in cases where a low osseointegration response is expected, the implant design should be evaluated. Copyright © 2014 Elsevier GmbH. All rights reserved.
How to Construct a Mixed Methods Research Design.
Schoonenboom, Judith; Johnson, R Burke
2017-01-01
This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.
Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis
NASA Technical Reports Server (NTRS)
Arkin, C.; Gillespie, Stacey; Ratzel, Christopher
2010-01-01
A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.
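For converting a measured dew point into a moisture concentration of the kind quoted above (ppm-level errors), one common approach — not necessarily the one implemented in the PDP-MS LabVIEW software — is a Magnus-type saturation vapour-pressure formula:

```python
import math

def ppmv_from_dew_point(t_dp_c, p_total_hpa=1013.25):
    """Approximate moisture content (ppmv) from a dew point in deg C, using
    the Magnus vapour-pressure formula over water; the coefficients used
    here are one common published set, assumed for illustration."""
    e_s = 6.112 * math.exp(17.62 * t_dp_c / (243.12 + t_dp_c))  # saturation pressure, hPa
    return e_s / p_total_hpa * 1e6

# a fairly dry gas: -40 C dew point corresponds to a few hundred ppmv
print(round(ppmv_from_dew_point(-40.0)))
```

On this scale, a desorbing sample line that shifts the apparent dew point by only a few degrees produces moisture errors of the hundreds-of-ppm magnitude the abstract reports.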
The MUSE project face to face with reality
NASA Astrophysics Data System (ADS)
Caillier, P.; Accardo, M.; Adjali, L.; Anwand, H.; Bacon, Roland; Boudon, D.; Brotons, L.; Capoani, L.; Daguisé, E.; Dupieux, M.; Dupuy, C.; François, M.; Glindemann, A.; Gojak, D.; Hansali, G.; Hahn, T.; Jarno, A.; Kelz, A.; Koehler, C.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J.-L.; Loupias, M.; Manescau, A.; Migniau, J. E.; Monstein, C.; Nicklas, H.; Parès, L.; Pécontal-Rousset, A.; Piqueras, L.; Reiss, R.; Remillieux, A.; Renault, E.; Rupprecht, G.; Streicher, O.; Stuik, R.; Valentin, H.; Vernet, J.; Weilbacher, P.; Zins, G.
2012-09-01
MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument built for ESO (European Southern Observatory) to be installed in Chile on the VLT (Very Large Telescope). The MUSE project is supported by a European consortium of 7 institutes. After the critical turning point of shifting from the design to the manufacturing phase, the MUSE project has now completed the realization of its different sub-systems and should soon finalize its global integration and test in Europe. To arrive at this point, many challenges had to be overcome: technical difficulties, non-compliances and procurement delays that at the time seemed overwhelming. Now is the time to face the results of our organization, of our strategy, of our choices. Now is the time to face the reality of the MUSE instrument. During the design phase, a plan was provided by the project management in order to achieve the realization of the MUSE instrument to specification, on time and within cost. This critical moment in the project's life, when the instrument takes shape and becomes reality, is the opportunity to look not only at the outcome but also at how well we followed the original plan, what had to be changed or adapted, and what should have been.
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Chen, Tao
2018-06-01
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and splash interference. The tracking error can reach ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which fully meets actual welding requirements.
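The abstract does not spell out the measurement principle. A generic line-laser triangulation sketch — back-projecting a detected feature pixel through a pinhole camera and intersecting the ray with a calibrated laser plane — illustrates how a 2D feature point becomes a 3D coordinate; the intrinsics and plane below are hypothetical, not the paper's calibration:

```python
def pixel_to_3d(u, v, fx, fy, cx, cy, plane):
    """Back-project pixel (u, v) through a pinhole camera with focal lengths
    (fx, fy) and principal point (cx, cy), then intersect the viewing ray
    with the laser plane a*x + b*y + c*z + d = 0 (all in the camera frame).
    Intrinsics and the plane are assumed calibrated beforehand."""
    # ray direction on the z = 1 normalised image plane
    rx, ry, rz = (u - cx) / fx, (v - cy) / fy, 1.0
    a, b, c, d = plane
    denom = a * rx + b * ry + c * rz
    if abs(denom) < 1e-12:
        raise ValueError("ray parallel to laser plane")
    t = -d / denom          # ray parameter at the intersection
    return (t * rx, t * ry, t * rz)

# hypothetical intrinsics and a laser plane 0.5 m in front of the camera (z = 0.5)
X, Y, Z = pixel_to_3d(700, 500, fx=1200, fy=1200, cx=640, cy=480, plane=(0, 0, 1, -0.5))
print(X, Y, Z)
```

At the sensor's 50 Hz frame rate this per-pixel computation is negligible; the hard part the paper addresses is finding the feature pixel reliably under arc light and spatter.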
Collins, Dannie L.; Flynn, Kathleen M.
1979-01-01
This report summarizes and makes available to other investigators the measured hydraulic data collected during a series of experiments designed to study the effect of patterned bed roughness on steady and unsteady open-channel flow. The patterned effect of the roughness was obtained by clear-cut mowing of designated areas of an otherwise fairly dense coverage of coastal Bermuda grass approximately 250 mm high. All experiments were conducted in the Flood Plain Simulation Facility during the period of October 7 through December 12, 1974. Data from 18 steady flow experiments and 10 unsteady flow experiments are summarized. Measured data included are ground-surface elevations, grass heights and densities, water-surface elevations and point velocities for all experiments. Additional tables of water-surface elevations and measured point velocities are included for the clear-cut areas for most experiments. One complete set of average water-surface elevations and one complete set of measured point velocities are tabulated for each steady flow experiment. Time series data, on a 2-minute time interval, are tabulated for both water-surface elevations and point velocities for each unsteady flow experiment. All data collected, including individual records of water-surface elevations for the steady flow experiments, have been stored on computer disk storage and can be retrieved using the computer programs listed in the attachment to this report. (Kosco-USGS)
Pérez Suárez, Santiago T.; Travieso González, Carlos M.; Alonso Hernández, Jesús B.
2013-01-01
This article presents a design methodology for designing an artificial neural network as an equalizer for a binary signal. Firstly, the system is modelled in floating point format using Matlab. Afterward, the design is described for a Field Programmable Gate Array (FPGA) using fixed point format. The FPGA design is based on the System Generator from Xilinx, which is a design tool over Simulink of Matlab. System Generator allows one to design in a fast and flexible way. It uses low level details of the circuits and the functionality of the system can be fully tested. System Generator can be used to check the architecture and to analyse the effect of the number of bits on the system performance. Finally the System Generator design is compiled for the Xilinx Integrated System Environment (ISE) and the system is described using a hardware description language. In ISE the circuits are managed with high level details and physical performances are obtained. In the Conclusions section, some modifications are proposed to improve the methodology and to ensure portability across FPGA manufacturers.
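The floating-point-to-fixed-point step described above can be illustrated with a small quantisation sketch; the word lengths and the coefficient value are illustrative, and System Generator performs this kind of analysis at the circuit level rather than in plain Python:

```python
def to_fixed(x, frac_bits, total_bits=16):
    """Quantise a float to signed two's-complement fixed point with
    frac_bits fractional bits, saturating at the representable range."""
    scale = 1 << frac_bits
    q = int(round(x * scale))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))

def from_fixed(q, frac_bits):
    """Convert the integer representation back to a float."""
    return q / (1 << frac_bits)

# quantisation error for a hypothetical equaliser coefficient at three word lengths
w = 0.3333
for fb in (4, 8, 12):
    wq = from_fixed(to_fixed(w, fb), fb)
    print(fb, wq, abs(w - wq))
```

Sweeping the fractional word length like this mirrors the System Generator workflow of checking how the number of bits affects equaliser performance before committing to an FPGA implementation.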
A genuine nonlinear approach for controller design of a boiler-turbine system.
Yang, Shizhong; Qian, Chunjiang; Du, Haibo
2012-05-01
This paper proposes a genuine nonlinear approach for controller design of a drum-type boiler-turbine system. Based on a second order nonlinear model, a finite-time convergent controller is first designed to drive the states to their setpoints in a finite time. In the case when the state variables are unmeasurable, the system will be regulated using a constant controller or an output feedback controller. An adaptive controller is also designed to stabilize the system since the model parameters may vary under different operating points. The novelty of the proposed controller design approach lies in fully utilizing the system nonlinearities instead of linearizing or canceling them. In addition, the newly developed techniques for finite-time convergent controller are used to guarantee fast convergence of the system. Simulations are conducted under different cases and the results are presented to illustrate the performance of the proposed controllers. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
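The finite-time convergence property rests on fractional-power feedback. A toy first-order sketch — not the paper's boiler-turbine model or its controller, just the core ingredient under assumed gains — shows the idea:

```python
# Finite-time convergent control of a toy first-order plant x' = u, using
# u = -k * |e|**alpha * sign(e) with 0 < alpha < 1. The fractional power
# keeps the control authority relatively large near the setpoint, which is
# what yields finite-time rather than merely asymptotic convergence.
def simulate(x0=1.0, setpoint=0.0, k=2.0, alpha=0.5, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        e = x - setpoint
        u = -k * abs(e) ** alpha * (1 if e > 0 else -1 if e < 0 else 0)
        x += u * dt  # forward-Euler integration of the plant
    return x

print(abs(simulate()))  # ends very close to the setpoint
```

With alpha = 1 the same loop is ordinary linear feedback and only converges exponentially; shrinking alpha below 1 is the "genuine nonlinear" element the paper exploits instead of cancelling the nonlinearities.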
Women's HIV Disclosure to Family and Friends
Craft, Shonda M.; Reed, Sandra J.
2012-01-01
Abstract Previous researchers have documented rates of HIV disclosure to family at discrete time periods, yet none have taken a dynamic approach to this phenomenon. The purpose of this study is to take the next step and provide a retrospective comparison of rates of women's HIV disclosure to family and friends over a 15-year time span. Of particular interest are the possible influences of social network and relationship characteristics on the time-to-disclosure of serostatus. Time-to-disclosure was analyzed from data provided by 125 HIV-positive women. Participants were primarily married or dating (42%), unemployed (79.2%), African American (68%) women with a high school diploma or less (54.4%). Length of time since diagnosis ranged from 1 month to over 19 years (M=7.1 years). Results pointed to statistically significant differences in time-to-disclosure between family, friends, and sexual partners. Additionally, females and persons with whom the participant had more frequent contact were more likely to be disclosed to, regardless of the type of relationship. The results of this study underscore possible challenges with existing studies which have employed point prevalence designs, and point to new methods which could be helpful in family research. PMID:22313348
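Time-to-disclosure data of this kind are typically handled with survival methods. As a sketch of one standard estimator (Kaplan-Meier, which accommodates the censoring that point-prevalence designs ignore) on invented disclosure times, not the study's data:

```python
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = disclosed, 0 = censored.
    Returns [(t, S(t))] at each time where a disclosure occurred."""
    pts = sorted(zip(times, events))
    n_at_risk = len(pts)
    s, curve = 1.0, []
    i = 0
    while i < len(pts):
        t = pts[i][0]
        at_t = sum(1 for tt, _ in pts if tt == t)          # everyone leaving at t
        d = sum(1 for tt, ev in pts if tt == t and ev == 1)  # disclosures at t
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_t
        i += at_t
    return curve

# hypothetical months-to-disclosure, with censoring (0 = never observed disclosing)
times  = [1, 3, 3, 6, 9, 12, 12, 18]
events = [1, 1, 0, 1, 0, 1, 1, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

S(t) here is the estimated probability of still not having disclosed by month t; comparing such curves between family, friends, and partners is the dynamic analogue of the discrete-time rates earlier studies reported.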
Fine pointing control for a Next-Generation Space Telescope
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
The Next Generation Space Telescope will provide at least ten times the collecting area of the Hubble Space Telescope in a package that fits into the shroud of an expendable launch vehicle. The resulting large, flexible structure poses a challenge to the design of a pointing control system for which the requirements are at the milli-arcsecond level. This paper describes a design concept in which pointing stability is achieved by means of a nested-loop design involving an inertial attitude control system (ACS) and a fast steering mirror (FSM). A key to the integrated control design is that the ACS controller has a bandwidth well below known structural modes and the FSM uses a rotationally balanced mechanism which should not interact with the flexible modes that are within its control bandwidth. The ACS controller provides stable pointing of the spacecraft bus with star trackers and gyros. This low-bandwidth loop uses nearly co-located sensors and actuators to slew and acquire faint guide stars in the NIR camera. This controller provides a payload reference stable to the arcsecond level. Low-frequency pointing errors due to sensor noise and dynamic disturbances are suppressed by a 2-axis gimbaled FSM located in the instrument module. The FSM servo bandwidth of 6 Hz is intended to keep the guide star position stable in the NIR focal plane to the required milli-arcsecond level. The mirror is kept centered in its range of travel by a low-bandwidth loop closed around the ACS. This paper presents the results of parametric trade studies designed to assess the performance of this control design in the presence of modeled reaction wheel disturbances, assumed to be the principal source of vibration for the NGST, and variations in structural dynamics. Additionally, requirements for reaction wheel disturbance levels and potential vibration isolation subsystems were developed.
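The nested-loop concept — a slow ACS absorbing bias while a roughly 6 Hz FSM loop removes the residual jitter — can be illustrated with a toy simulation; the disturbance levels, gains and bandwidths below are illustrative, not the NGST design values:

```python
import math

# Toy nested-loop pointing: a slow integral "ACS" removes the constant bias,
# while a faster first-order "FSM" loop (~6 Hz bandwidth) tracks and removes
# the residual jitter that the ACS is too slow to reject.
def simulate(t_end=10.0, dt=1e-3):
    acs = 0.0                     # ACS correction (slow integrator state)
    fsm = 0.0                     # FSM mirror position (first-order tracker)
    w_fsm = 2 * math.pi * 6.0     # ~6 Hz FSM bandwidth
    k_acs = 0.5                   # slow integral gain for the ACS loop
    residuals = []
    t = 0.0
    while t < t_end:
        dist = 0.01 + 0.001 * math.sin(2 * math.pi * 1.0 * t)  # bias + 1 Hz jitter
        err = dist - acs              # pointing error left after the ACS
        fsm += w_fsm * (err - fsm) * dt   # FSM tracks that error
        los = err - fsm               # line-of-sight residual after the FSM
        acs += k_acs * err * dt       # ACS slowly absorbs the bias
        if t > 5.0:                   # skip the start-up transient
            residuals.append(los)
        t += dt
    return (sum(r * r for r in residuals) / len(residuals)) ** 0.5

print(simulate())  # residual RMS well below the 0.01 bias amplitude
```

The separation of roles mirrors the paper's design: the integrator would interact badly with fast disturbances, and the mirror alone would drift off-centre, but together the bias is absorbed slowly and the jitter is suppressed within the FSM bandwidth.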
Design of the Annular Suspension and Pointing System (ASPS) (including design addendum)
NASA Technical Reports Server (NTRS)
Cunningham, D.; Gismondi, T.; Hamilton, B.; Kendig, J.; Kiedrowski, J.; Vroman, A.; Wilson, G.
1980-01-01
The Annular Suspension and Pointing System is an experiment pointing mount designed for extremely precise 3-axis orientation of shuttle experiments. It utilizes actively controlled magnetic bearings to provide noncontacting vernier pointing and translational isolation of the experiment. The design of the system is presented and analyzed.
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and are addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of the vertical and horizontal polarizations provided measurements of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
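The sigma-point approach mentioned above propagates a parameter PDF through the model nonlinearity using a small set of deterministically chosen points rather than linearization. A one-dimensional sketch of the underlying unscented transform; the quadratic test function and Gaussian parameters are illustrative, not from the paper:

```python
import math

def unscented_mean(mu, var, f, alpha=1.0, beta=0.0, kappa=2.0):
    """Sigma-point (unscented) estimate of E[f(x)] for scalar x ~ N(mu, var).
    Uses the standard 2n+1 sigma points and weights for n = 1."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    s = math.sqrt((n + lam) * var)
    pts = [mu, mu + s, mu - s]
    w0 = lam / (n + lam)
    wi = 1.0 / (2 * (n + lam))
    return sum(w * f(p) for w, p in zip([w0, wi, wi], pts))

# for f(x) = x**2 and x ~ N(1, 0.25), the true mean is mu**2 + var = 1.25;
# the unscented transform matches second moments, so it recovers it exactly
print(unscented_mean(1.0, 0.25, lambda x: x * x))  # → 1.25
```

A first-order linearization of f at mu would give 1.0 here, missing the variance contribution entirely — the same failure mode that makes linearization lose to sigma points for widely distributed parameters. The cost is only 2n+1 model evaluations, which is the linear scaling in the number of parameters the paper cites.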
EBEX: A Balloon-Borne Telescope for Measuring Cosmic Microwave Background Polarization
NASA Astrophysics Data System (ADS)
Chapman, Daniel
2015-05-01
EBEX is a long-duration balloon-borne (LDB) telescope designed to probe polarization signals in the cosmic microwave background (CMB). It is designed to measure or place an upper limit on the inflationary B-mode signal, a signal predicted by inflationary theories to be imprinted on the CMB by gravitational waves, to detect the effects of gravitational lensing on the polarization of the CMB, and to characterize polarized Galactic foreground emission. The payload consists of a pointed gondola that houses the optics, polarimetry, detectors and detector readout systems, as well as the pointing sensors, control motors, telemetry systems, and data acquisition and flight control computers. Polarimetry is achieved with a rotating half-wave plate and wire grid polarizer. The detectors are sensitive to frequency bands centered on 150, 250, and 410 GHz. EBEX was flown in 2009 from New Mexico as a full system test, and then flown again in December 2012 / January 2013 over Antarctica in a long-duration flight to collect scientific data. In the instrumentation part of this thesis we discuss the pointing sensors and attitude determination algorithms. We also describe the real-time map making software, "QuickLook", that was custom-designed for EBEX. We devote special attention to the design and construction of the primary pointing sensors, the star cameras, and their custom-designed flight software package, "STARS" (the Star Tracking Attitude Reconstruction Software). In the analysis part of this thesis we describe the current status of the post-flight analysis procedure. We discuss the data structures used in analysis and the pipeline stages related to attitude determination and map making. We also discuss a custom-designed software framework called "LEAP" (the LDB EBEX Analysis Pipeline) that supports most of the analysis pipeline stages.
Software Design Document MCC CSCI (1). Volume 2, Sections 2.18.1 - 2.22
1991-06-01
tparam: pointer to long mnt (standard C type). Internal variables: td ...ist points to the last transaction pointer on the TimeList. The function call is AssocAddToStartOfTimeList(td, startTimeList, endTimeList). Table 2.20-42 describes the parameters used by this function. Parameters: td, pointer to /simnetllibsrc/libassoc/assoc
Focused Logistics and Support for Force Projection in Force XXI and Beyond
1999-12-09
business system linking trading partners with point-of-sale demand and real-time manufacturing for clothing items.17 Quick Response achieved $1.7...be able to determine the real-time status and supply requirements of units. With "distributed logistics system software model hosts" and active...location, quantity, condition, and movement of assets. The system is designed to be fully automated, operate in near-real time with an open architecture
Lee, Da-Sheng
2010-01-01
Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design. PMID:22315563
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large; whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. The classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it is for constant dispersions of the loop-gain because the GM is based on frequency-domain analysis, which is applicable only for LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well-known that the LTI system stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. 
Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
Precise control of flexible manipulators
NASA Technical Reports Server (NTRS)
Cannon, R. H., Jr.; Bindford, T. O.; Schmitz, E.
1984-01-01
The design and experimental testing of end point position controllers for a very flexible one link lightweight manipulator are summarized. The latest upgraded version of the experimental setup, and the basic differences between conventional joint angle feedback and end point position feedback are described. A general procedure for application of modern control methods to the problem is outlined. The relationship between weighting parameters and the bandwidth and control stiffness of the resulting end point position closed loop system is shown. It is found that joint rate angle feedback in addition to the primary end point position sensor is essential for adequate disturbance rejection capability of the closed loop system. The use of a low-order multivariable compensator design computer code, called Sandy, is documented. A solution to the problem of control mode switching between position sensor sets is outlined. The proof of concept for end point position feedback for a one link flexible manipulator was demonstrated. The bandwidth obtained with the experimental end point position controller is about twice as fast as the beam's first natural cantilevered frequency, and comes within a factor of four of the absolute physical speed limit imposed by the wave propagation time of the beam.
Pointing control for the International Comet Mission
NASA Technical Reports Server (NTRS)
Leblanc, D. R.; Schumacher, L. L.
1980-01-01
The design of the pointing control system for the proposed International Comet Mission, intended to fly by Comet Halley and rendezvous with Comet Tempel-2, is presented. Following a review of mission objectives and the spacecraft configuration, design constraints on the pointing control system controlling the two-axis gimballed scan platform supporting the science instruments are discussed in relation to the scientific requirements of the mission. The primary design options considered for the pointing control system design for the baseline spacecraft are summarized, and the design selected, which employs a target-referenced, inertially stabilized control system, is described in detail. The four basic modes of operation of the pointing control subsystem (target acquisition, inertial hold, target track and slew) are discussed as they relate to operations at Halley and Tempel-2. It is pointed out that the pointing control system design represents a significant advance in the state of the art of pointing controls for planetary missions.
Missileborne Artificial Vision System (MAVIS)
NASA Technical Reports Server (NTRS)
Andes, David K.; Witham, James C.; Miles, Michael D.
1994-01-01
Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, an NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system-level software has been developed, which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.
Androgen deprivation and stem cell markers in prostate cancers
Tang, Yao; Hamburger, Anne W; Wang, Linbo; Khan, Mohammad Afnan; Hussain, Arif
2010-01-01
In our previous studies using human LNCaP xenografts and TRAMP (transgenic adenocarcinoma of mouse prostate) mice, androgen deprivation therapy (ADT) resulted in a temporary cessation of prostate cancer (PCa) growth, but then tumors grew faster with more malignant behaviour. To understand whether cancer stem cells might play a role in PCa progression in these animal models, we investigated the expressions of stem cell-related markers in tumors at different time points after ADT. In both animal models, enhanced expressions of stem cell markers were observed in tumors of castrated mice, as compared to non-castrated controls. This increased cell population that expressed stem cell markers is designated as stem-like cells (SLC) in this article. We also observed that the SLC peaked at relatively early time points after ADT, before tumors resumed their growth. These results suggest that the SLC population may play a role in tumor re-growth and disease progression, and that targeting the SLC at their peak-expression time point may prevent tumor recurrence following ADT. PMID:20126580
A model for incomplete longitudinal multivariate ordinal data.
Liu, Li C
2008-12-30
In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.
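The marginal likelihood step described above integrates the random effects out by Gauss-Hermite quadrature. A minimal univariate sketch of that quadrature rule (a generic illustration, not the model's multidimensional implementation; the function name and point count are ours):

```python
import math
import numpy as np

def expect_under_normal(f, n_points=20):
    """Approximate E[f(Z)] for Z ~ N(0,1) by Gauss-Hermite quadrature.

    hermgauss returns nodes/weights for the weight function exp(-x^2);
    the change of variable z = sqrt(2)*x maps that weight onto the
    standard-normal density, leaving a 1/sqrt(pi) normalization.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    total = sum(w * f(math.sqrt(2.0) * x) for x, w in zip(nodes, weights))
    return total / math.sqrt(math.pi)

# A probit-style term E[Phi(a + b*Z)] -- the kind of integral that appears
# in the marginal likelihood of a model with a normal random subject effect.
phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
marginal = expect_under_normal(lambda z: phi(0.3 + 0.8 * z))
```

With 20 nodes the rule is exact for polynomial integrands up to degree 39, which is why moderate point counts suffice in practice.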
A field test of three LQAS designs to assess the prevalence of acute malnutrition.
Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary
2007-08-01
The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM as <10% for the 33 x 6 design and GAM as ≥10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
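The LQAS hypothesis tests above reduce to binomial decision rules. A sketch of how the operating characteristics of such a rule can be checked (the sample size and decision threshold below are illustrative round numbers, not the paper's exact designs):

```python
from math import comb

def lqas_error_probs(n, d, p_lo, p_hi):
    """Operating characteristics of the LQAS rule "flag GAM >= threshold
    when at least d of n sampled children are malnourished".

    Returns (alpha, beta): alpha is the false-alarm probability when the
    true prevalence is p_lo; beta is the miss probability when it is p_hi.
    """
    def tail_ge(p):  # P(X >= d) for X ~ Binomial(n, p)
        return sum(comb(n, k) * p**k * (1.0 - p)**(n - k)
                   for k in range(d, n + 1))
    return tail_ge(p_lo), 1.0 - tail_ge(p_hi)

# Hypothetical 33 x 6 = 198-child sample, flagging at 20 or more positives,
# discriminating a 5% from a 15% true prevalence:
alpha, beta = lqas_error_probs(198, 20, p_lo=0.05, p_hi=0.15)
```

Tabulating alpha and beta over candidate thresholds d is one way such designs are tuned before fieldwork.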
Coping strategies among patients with newly diagnosed amyotrophic lateral sclerosis.
Jakobsson Larsson, Birgitta; Nordin, Karin; Askmark, Håkan; Nygren, Ingela
2014-11-01
To prospectively identify different coping strategies among newly diagnosed amyotrophic lateral sclerosis patients and whether they change over time and to determine whether physical function, psychological well-being, age and gender correlated with the use of different coping strategies. Amyotrophic lateral sclerosis is a fatal disease with impact on both physical function and psychological well-being. Different coping strategies are used to manage symptoms and disease progression, but knowledge about coping in newly diagnosed amyotrophic lateral sclerosis patients is scarce. This was a prospective study with a longitudinal and descriptive design. A total of 33 patients were included and evaluation was made at two time points, one to three months and six months after diagnosis. Patients were asked to complete the Motor Neuron Disease Coping Scale and the Hospital Anxiety and Depression Scale. Physical function was estimated using the revised Amyotrophic Lateral Sclerosis Functional Rating Scale. The most commonly used strategies were support and independence. Avoidance/venting and information seeking were seldom used at both time points. The use of information seeking decreased between the two time points. Men did not differ from women, but patients ≤64 years used positive action more often than older patients. Amyotrophic Lateral Sclerosis Functional Rating Scale was positively correlated with positive action at time point 1, but not at time point 2. Patients' psychological well-being was correlated with the use of different coping strategies. Support and independence were the most used coping strategies, and the use of different strategies changed over time. Psychological well-being was correlated with different coping strategies in newly diagnosed amyotrophic lateral sclerosis patients. The knowledge about coping strategies in early stage of the disease may help the nurses to improve and develop the care and support for these patients. 
© 2014 John Wiley & Sons Ltd.
Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei
2017-08-01
Snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. The redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery, however, the few that were developed are yet to be fully explored for clinical procedures. This is due to a lack of capability for full-fledged spatial navigation. In rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we proposed a non-iterative geometric method for solving IK of lead-module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method is aimed at providing accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) were geometrically computed and set as coordinates of intermediary joints in an n-link module. Suitable joint angles that can place the end-effector at given target points were then computed by vectorizing coordinates of the VPs, in addition to coordinates of the base point, target point, and tip of the first link in its default pose. The proposed method is applied to solve IK of two-link and redundant four-link modules. Both two-link and four-link modules were simulated with Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method can solve IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their workspaces, respectively, with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s. 
This paper focuses on solving IK problem of a spatially flexible robot which is part of a developmental project for abdominal surgery through minimal invasion or natural orifices. The study showed that the proposed geometric method can resolve IK of the snake-like robot with negligible error offset. Evaluation against well-known methods shows that the proposed method can reach several points in the robot's workspace with high accuracy and shorter computational time, simultaneously.
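The flavor of such a closed-form, non-iterative geometric IK solution can be seen in the classic planar two-link case (a simplified analogue for intuition, not the authors' virtual-point formulation for the spatial snake-like module):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form (non-iterative) IK for a planar two-link arm.

    Returns (shoulder, elbow) angles in radians placing the tip at (x, y),
    or None when the target lies outside the reachable annulus.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # unreachable target
    # Law of cosines gives the elbow angle (elbow-down branch).
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, cos_t2)))
    # Shoulder angle: bearing to the target minus the offset due to the bend.
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def fk(t1, t2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))
```

Because every quantity is computed directly rather than iterated, the cost per query is constant, which is the property the abstract's sub-0.009 s execution time reflects.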
Curtis, Ashley F.; Branscombe-Caird, Laura M.; Comrie, Janna K.; Murtha, Susan J.E.
2018-01-01
Objectives: To investigate whether a commercially available brain training program is feasible to use with a middle-aged population and has a potential impact on cognition and emotional well-being (proof of concept). Method: Fourteen participants (ages 46–55) completed two 6-week training conditions using a crossover (counterbalanced) design: (1) experimental brain training condition and (2) active control "find answers to trivia questions online" condition. A comprehensive neurocognitive battery and a self-report measure of depression and anxiety were administered at baseline (first time point, before training) and after completing each training condition (second time point at 6 weeks, and third time point at 12 weeks). Cognitive composite scores were calculated for participants at each time point. Results: Study completion and protocol adherence demonstrated good feasibility of this brain training protocol in healthy middle-aged adults. Exploratory analyses suggested that brain training was associated with neurocognitive improvements related to executive attention, as well as improvements in mood. Conclusion: Overall, our findings suggest that brain training programs are feasible in middle-aged cohorts. We propose that brain training games may be linked to improvements in executive attention and affect by promoting cognitive self-efficacy in middle-aged adults. PMID:29189046
Tcl as a Software Environment for a TCS
NASA Astrophysics Data System (ADS)
Terrett, David L.
2002-12-01
This paper describes how the Tcl scripting language and C API has been used as the software environment for a telescope pointing kernel so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. It has enabled development to continue outside the framework of a specific telescope project while continuing to build a system that is sufficiently complete to be capable of controlling real hardware, while expending minimum effort on replacing the services that would normally be provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provide a system-independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system.
Strategy for Texture Management in Metals Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.
2017-01-31
Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy, Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated through using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.
NASA Astrophysics Data System (ADS)
Higuita Cano, Mauricio; Mousli, Mohamed Islam Aniss; Kelouwani, Sousso; Agbossou, Kodjo; Hammoudi, Mhamed; Dubé, Yves
2017-03-01
This work investigates the design and validation of a fuel cell management system (FCMS) that can perform when the fuel cell is at water-freezing temperatures. The FCMS is based on a new tracking technique with intelligent prediction, which combines Maximum Efficiency Point Tracking with a variable perturbation-current step and a fuzzy logic technique (MEPT-FL). Unlike conventional fuel cell control systems, the proposed FCMS accounts for cold-weather conditions and reduces fuel cell set-point oscillations. In addition, the FCMS is built to respond quickly and effectively to variations in electric load. A temperature controller stage is designed in conjunction with the MEPT-FL in order to operate the FC at low temperatures while tracking the maximum efficiency point at the same time. The simulation results, together with experimental validation, suggest that the proposed approach is effective and can achieve an average efficiency improvement of up to 8%. The MEPT-FL is validated using a 500 W Proton Exchange Membrane Fuel Cell (PEMFC).
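The tracking idea underlying MEPT can be sketched as a plain variable-step perturb-and-observe loop (a generic illustration with a toy efficiency curve; the paper's MEPT-FL adds fuzzy-logic step adaptation and the thermal management stage, which are not shown here):

```python
def perturb_and_observe(efficiency, i0, step0, n_iter=200,
                        shrink=0.5, min_step=1e-3):
    """Variable-step perturb-and-observe tracking of an efficiency maximum.

    Perturb the current set-point; keep the direction while efficiency
    improves, otherwise reverse and shrink the step. Shrinking damps the
    oscillation around the optimum that a fixed-step tracker exhibits.
    """
    i, step, direction = i0, step0, 1.0
    eta = efficiency(i)
    for _ in range(n_iter):
        i_new = i + direction * step
        eta_new = efficiency(i_new)
        if eta_new > eta:
            i, eta = i_new, eta_new              # improvement: keep going
        else:
            direction = -direction               # worse: reverse...
            step = max(step * shrink, min_step)  # ...and damp the step
    return i

# Toy efficiency curve with its maximum at i = 12 A (hypothetical numbers):
eff = lambda i: 0.5 - 0.002 * (i - 12.0) ** 2
i_star = perturb_and_observe(eff, i0=5.0, step0=1.0)
```

The shrinking step is what reduces set-point oscillation, the concern the abstract raises about conventional trackers.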
The cislunar low-thrust trajectories via the libration point
NASA Astrophysics Data System (ADS)
Qu, Qingyu; Xu, Ming; Peng, Kun
2017-05-01
Low-thrust propulsion will be one of the most important propulsion technologies in the future due to its large specific impulse. Different from traditional low-thrust trajectories (LTTs) yielded by some optimization algorithms, the gradient-based design methodology is investigated for LTTs in this paper with the help of invariant manifolds of the LL1 point and the Halo orbit near the LL1 point. Their deformations under solar gravitational perturbation are also presented to design LTTs in the restricted four-body model. The perturbed manifolds of the LL1 point and its Halo orbit serve as the free-flight phase to reduce fuel consumption as much as possible. An open-loop control law is proposed, which is used to guide the spacecraft escaping from Earth or captured by Moon. By using a two-dimensional search strategy, the ON/OFF time of the low-thrust engine in the Earth-escaping and Moon-captured phases can be obtained. The numerical implementations show that the LTTs achieved in this paper are consistent with the one adopted by the SMART-1 mission.
Re-Entry Point Targeting for LEO Spacecraft using Aerodynamic Drag
NASA Technical Reports Server (NTRS)
Omar, Sanny; Bevilacqua, Riccardo; Fineberg, Laurence; Treptow, Justin; Johnson, Yusef; Clark, Scott
2016-01-01
Most Low Earth Orbit (LEO) spacecraft do not have thrusters and re-enter the atmosphere at uncertain times and in random locations. These objects pose a risk to persons, property, and other satellites, a concern that has grown with the recent increase in small satellites. The authors are working on a NASA-funded project to design a retractable drag device that expedites de-orbit and targets a re-entry location through modulation of the drag area. The re-entry point targeting algorithm is discussed here.
An Investigation of Secondary Jetting Phenomena ’Hyperjet’ Shaped Charge.
1982-08-01
Hyperjet design. Primarily flash X-ray techniques would be used to measure jet velocities resulting from secondary collisions. A formal report would be...the liner in time (Eqns. 2), where a = liner acceleration of some point in the liner, Pcj = Chapman-Jouguet explosive pressure, Ps...liner. Since the liner acceleration of any point along the liner is known, it is therefore possible to describe the governing equation of motion as d²R/dt² = a
Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian
2011-01-01
Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138
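For reference, a minimal one-way random-effects ICC computation from an n-subjects-by-k-sessions table (one common ICC variant; the abstract does not specify which ICC model the study used):

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1): between-subject variance as a
    fraction of total variance, from n subjects x k repeated measurements.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between- and within-subject mean squares of a one-way ANOVA.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 1 means repeated sessions rank performers consistently; values below the 0.6 cutoff quoted above indicate measurements too noisy for individual tracking.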
On-board B-ISDN fast packet switching architectures. Phase 1: Study
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Lee, Fred; Paul, Dilip; Shyy, Dong-Jye
1993-01-01
The broadband integrated services digital network (B-ISDN) is an emerging telecommunications technology that will meet most of the telecommunications networking needs from the mid-1990s to the early next century. The satellite-based system is well positioned for providing B-ISDN service with its inherent capabilities of point-to-multipoint and broadcast transmission, virtually unlimited connectivity between any two points within a beam coverage, short deployment time of communications facility, flexible and dynamic reallocation of space segment capacity, and distance-insensitive cost. On-board processing satellites, particularly in a multiple spot beam environment, will provide enhanced connectivity, better performance, optimized access and transmission link design, and lower user service cost. The following are described: the user and network aspects of broadband services; the current development status in broadband services; various satellite network architectures including system design issues; and various fast packet switch architectures and their detailed designs.
NASA Astrophysics Data System (ADS)
Khaimovich, I. N.
2017-10-01
The article provides calculation algorithms for blank design and die forming to produce compressor blades for aircraft engines. The design system proposed in the article allows generating drafts of trimming and reducing dies automatically, leading to a significant reduction of work preparation time. A detailed analysis of the features of the blade structural elements was carried out; the adopted limitations and technological solutions allowed forming generalized algorithms for the parting stamp face over the entire contour of the engraving for different configurations of die forgings. The author worked out algorithms and programs to calculate three-dimensional point locations describing the configuration of the die cavity. As a result, the author obtained a generic mathematical model of the final die block in the form of a three-dimensional array of base points. This model is the basis for creating engineering documentation for the technological equipment and the means of its control.
Design of Al-rich AlGaN quantum well structures for efficient UV emitters
NASA Astrophysics Data System (ADS)
Funato, Mitsuru; Ichikawa, Shuhei; Kumamoto, Kyosuke; Kawakami, Yoichi
2017-02-01
The effects of the structure design of AlGaN-based quantum wells (QWs) on the optical properties are discussed. We demonstrate that to achieve efficient emission in the germicidal wavelength range (250-280 nm), AlxGa1-xN QWs in an AlyGa1-yN matrix (x < y) are quite effective compared with those in an AlN matrix: time-resolved photoluminescence and cathodoluminescence spectroscopies show that the AlyGa1-yN matrix can enhance the radiative recombination process and can prevent misfit dislocations, which act as non-radiative recombination centers, from being induced at the QW interface. As a result, the emission intensity at room temperature is about 2.7 times larger for the AlxGa1-xN QW in the AlyGa1-yN matrix than in the AlN matrix. We also point out that further reduction of point defects is crucial to achieve an even higher emission efficiency.
Digital computer program for generating dynamic turbofan engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.
1983-01-01
This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run. They are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
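The backward-difference scheme described above is, in its simplest form, the implicit (backward) Euler method. A minimal sketch, on an assumed stiff scalar test equation rather than DIGTEM's 16-state engine model:

```python
# Backward (implicit) Euler for a stiff ODE dx/dt = f(t, x), solving each
# implicit step x_{k+1} = x_k + h*f(t_{k+1}, x_{k+1}) by Newton iteration.
# The test problem and step size below are illustrative assumptions.
import math

def implicit_euler(f, dfdx, x0, t0, t1, n):
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        t += h
        x_new = x  # initial guess for the implicit solve
        for _ in range(20):  # Newton on g(y) = y - x - h*f(t, y)
            g = x_new - x - h * f(t, x_new)
            gp = 1.0 - h * dfdx(t, x_new)
            x_new -= g / gp
        x = x_new
    return x

# Stiff test problem dx/dt = -1000*(x - cos(t)): backward Euler remains
# stable with steps far larger than the fast time constant (1 ms).
f = lambda t, x: -1000.0 * (x - math.cos(t))
dfdx = lambda t, x: -1000.0
x_end = implicit_euler(f, dfdx, x0=0.0, t0=0.0, t1=1.0, n=100)  # h = 10 ms
```

The solution rapidly collapses onto the quasi-steady trajectory x(t) ≈ cos(t), which is why implicit schemes of this kind are the standard choice for stiff engine dynamics.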
Research on low-latency MAC protocols for wireless sensor networks
NASA Astrophysics Data System (ADS)
He, Chenguang; Sha, Xuejun; Lee, Chankil
2007-11-01
Energy efficiency should not be the only design goal in MAC protocols for wireless sensor networks, which involve battery-operated computing and sensing devices. Low-latency operation becomes as important as energy efficiency when the traffic load is very heavy or when real-time constraints apply in applications such as tracking or locating. This paper introduces the causes of the delays inherent in a multi-hop network using existing WSN MAC protocols, illuminates the importance of low-latency MAC design for wireless sensor networks, and presents three MACs as examples of low-latency protocols designed specifically for sleep delay, wait delay, and wakeup delay in wireless sensor networks, respectively. The paper also discusses design trade-offs with emphasis on low latency and points out their advantages and disadvantages, together with some design considerations and suggestions for MAC protocols for future applications and research.
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, in which all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with each additional level of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
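For reference, the O(N^2) brute-force baseline that the dual-tree algorithm is measured against can be sketched in a few lines; the points and bucket width here are illustrative assumptions, not data from the paper:

```python
# Brute-force spatial distance histogram (SDH): count every pairwise
# distance into fixed-width buckets. This is the quadratic baseline that
# tree-based batching improves on.
import math

def sdh_brute_force(points, bucket_width, num_buckets):
    hist = [0] * num_buckets
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):  # all N*(N-1)/2 unordered pairs
            d = math.dist(points[i], points[j])
            b = min(int(d / bucket_width), num_buckets - 1)
            hist[b] += 1
    return hist

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]   # a 3-4-5 triangle
h = sdh_brute_force(pts, bucket_width=2.0, num_buckets=3)
# pairwise distances 3, 4, 5 fall into buckets 1, 2, 2
```

The dual-tree algorithm obtains the same histogram by resolving whole node pairs at once whenever the distance range between two tree nodes fits inside a single bucket.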
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and the exposure to placebo is for only a short, fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios in which the treatment response times followed the exponential, Weibull, or lognormal distribution. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the underlying response-time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects to achieve power similar to that of the parallel-groups design.
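The study's simulations were written in R; as a minimal re-sketch in Python, a Monte Carlo power estimate for the parallel-groups arm of one scenario might look like this. The exponential medians come from the abstract, but the endpoint (response by day 180) and the two-proportion z-test are simplifying assumptions, not the study's actual analysis:

```python
# Monte Carlo power sketch: parallel-groups trial with exponentially
# distributed response times (median 355 d placebo, 42 d drug).
import math, random

def power_parallel(n_per_arm, t_eval=180.0, sims=2000, seed=1):
    random.seed(seed)
    lam_p = math.log(2) / 355.0   # placebo hazard (median 355 days)
    lam_d = math.log(2) / 42.0    # drug hazard (median 42 days)
    z_crit = 1.959964             # two-sided 5% critical value
    hits = 0
    for _ in range(sims):
        # number of responders by t_eval in each simulated arm
        rp = sum(random.expovariate(lam_p) <= t_eval for _ in range(n_per_arm))
        rd = sum(random.expovariate(lam_d) <= t_eval for _ in range(n_per_arm))
        p1, p2 = rp / n_per_arm, rd / n_per_arm
        pbar = (rp + rd) / (2 * n_per_arm)
        se = math.sqrt(2 * pbar * (1 - pbar) / n_per_arm)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            hits += 1
    return hits / sims

pw = power_parallel(n_per_arm=15)  # power is high: the medians differ 8-fold
```

Varying the response-time distribution (Weibull, lognormal) in place of `expovariate` is what drives the differing sample size requirements the study reports.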
Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu
2015-02-01
Physical properties of gelatin extracted from Unicorn leatherjacket (Aluterus monoceros) skin, which is generated as a waste product of fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on responses including yield, gel strength and melting point of gelatin. The optimum conditions derived by RSM for the yield (10.58%) were 0.2 M H3PO4, 9.01 h of extraction time and hot-water extraction at 45.83 °C. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable, with a positive coefficient on yield and negative coefficients on gel strength and melting point. The results indicated that Unicorn leatherjacket skins can be a source of gelatin with mild gel strength and melting point.
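A three-factor Box-Behnken design of the kind used here can be generated in coded units with a short sketch; only the H3PO4 concentration range (0.15-0.25 M) below is taken from the abstract, the rest is generic:

```python
# Box-Behnken design for k factors in coded units (-1, 0, +1): each pair
# of factors takes its 2^2 factorial corners while the remaining factors
# sit at the center, plus replicated center runs.
from itertools import combinations

def box_behnken(k, center_runs=3):
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * k for _ in range(center_runs)])
    return runs

design = box_behnken(3)
# 3 factor pairs x 4 corners + 3 center runs = 15 runs

def decode(coded, low, high):
    """Map a coded level (-1, 0, +1) to real units."""
    return (low + high) / 2 + coded * (high - low) / 2

acid = [decode(run[0], 0.15, 0.25) for run in design]  # H3PO4 molarity
```

Fitting a second-order polynomial to the 15 measured responses then yields the coefficient signs (e.g. extraction time positive on yield, negative on gel strength) that the optimisation is based on.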
NASA Astrophysics Data System (ADS)
Hammond, Emily; Dilger, Samantha K. N.; Stoyles, Nicholas; Judisch, Alexandra; Morgan, John; Sieren, Jessica C.
2015-03-01
Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols while characterizing the genotype-to-phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. After ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Alignment was quantified via eleven pairs of corresponding points on the skeleton, with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit aided in skeletal alignment within an average of 13.13 +/- 6.7 mm for all five subjects, providing a strong foundation for developing qualitative and quantitative methods of disease tracking.
Characterization and Design of Digital Pointing Subsystem for Optical Communication Demonstrator
NASA Technical Reports Server (NTRS)
Racho, C.; Portillo, A.
1998-01-01
The Optical Communications Demonstrator (OCD) is a laboratory-based lasercom demonstration terminal designed to validate several key technologies, including beacon acquisition, high bandwidth tracking, precision beam pointing, and point-ahead compensation functions. It has been under active development over the past few years. The instrument uses a CCD array detector for both spatial acquisition and high-bandwidth tracking, and a fiber-coupled laser transmitter. The array detector tracking concept provides wide field-of-view acquisition and permits effective platform jitter compensation and point-ahead control using only one steering mirror. This paper describes the detailed design and characterization of the digital control loop system, which includes the Fast Steering Mirror (FSM), the CCD image tracker, and the associated electronics. The objective is to improve the overall system performance using laboratory measured data. The design of the digital control loop is based on a linear time invariant open loop model. The closed loop performance is predicted using the theoretical model. With the digital filter programmed into the OCD control software, data is collected to verify the predictions. This paper presents the results of the system modeling and performance analysis. It has been shown that measurement data closely matches theoretical predictions. An important part of the laser communication experiment is the ability of the FSM to track the laser beacon within the required tolerances. The pointing must be maintained to an accuracy that is much smaller than the transmit signal beamwidth. For an earth orbit distance, the system must be able to track the receiving station to within a few microradians. Failure to do so will result in severely degraded system performance.
The relation between visualization size, grouping, and user performance.
Gramazio, Connor C; Schloss, Karen B; Laidlaw, David H
2014-12-01
In this paper we make the following contributions: (1) we describe how the grouping, quantity, and size of visual marks affect search time, based on the results from two experiments; (2) we report how search performance relates to self-reported difficulty in finding the target for different display types; and (3) we present design guidelines based on our findings to facilitate the design of effective visualizations. Both Experiments 1 and 2 asked participants to search for a unique target in colored visualizations to test how the grouping, quantity, and size of marks affect user performance. In Experiment 1 the target was a square embedded in a grid of squares, and in Experiment 2 the target was a point in a scatterplot. Search performance was faster when colors were spatially grouped than when they were randomly arranged. The quantity of marks had little effect on search time for grouped displays ("pop-out"), but increasing the quantity of marks slowed reaction time for random displays. Regardless of color layout (grouped vs. random), response times were slowest for the smallest mark size and decreased as mark size increased to a point, after which response times plateaued. In addition to these two experiments we also discuss potential application areas, as well as results from a small case study in which we report preliminary findings that size may affect how users infer how visualizations should be used. We conclude with a list of design guidelines that focus on how best to create visualizations based on the grouping, quantity, and size of visual marks.
Analysis and design of gain scheduled control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Shamma, Jeff S.
1988-01-01
Gain scheduling, as an idea, is to construct a global feedback control system for a time-varying and/or nonlinear plant from a collection of local time-invariant designs. However, in the absence of a sound analysis, these designs come with no guarantees on the robustness, performance, or even nominal stability of the overall gain scheduled design. Such an analysis is presented for three types of gain scheduling situations: (1) a linear parameter-varying plant scheduling on its exogenous parameters, (2) a nonlinear plant scheduling on a prescribed reference trajectory, and (3) a nonlinear plant scheduling on the current plant output. Conditions are given which guarantee that the stability, robustness, and performance properties of the fixed operating point designs carry over to the global gain scheduled designs, such as that the scheduling variable should vary slowly and capture the plant's nonlinearities. Finally, an alternate design framework is proposed which removes the slowly varying restriction on gain scheduled systems. This framework addresses some fundamental feedback issues previously ignored in standard gain scheduling.
Blade Pressure Distribution for a Moderately Loaded Propeller.
1980-09-01
Fragmentary excerpts from the report's nomenclature and text: lifting surface, ft^2; s, chordwise location as fraction of chord length; t, time, sec; t, maximum thickness of blade, ft; U, free stream velocity, ft/sec (design ... developed in Reference 1, it takes into account the quadratic form of the Bernoulli equation, since the perturbation velocities are sometimes of the ... normal derivatives at the loading and control point, respectively. It should be noted that the time factor has been eliminated from both sides of Eq. (3
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires all state-variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at arbitrary points of the TFM with an unlimited number of sensors. Considering the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time-scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals of the TFM and their time derivatives. The speed observer and the vibration observer are separately designed for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time-scale virtual sensor are optimized, with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time-scale virtual sensor. PMID:27801840
Ta, Van M; Juon, Hee-Soon; Gielen, Andrea C; Steinwachs, Donald; McFarlane, Elizabeth; Duggan, Anne
2009-02-01
This longitudinal study examined racial differences in depressive symptoms at three time points among Asian, Native Hawaiian/Other Pacific Islander (NHOPI) and white mothers at-risk for child maltreatment (n = 616). The proportion of mothers with depressive symptoms ranged from 28 to 35% at all time points. Adjusted analyses revealed that Asian and NHOPI mothers were significantly more likely than white mothers to have depressive symptoms but this disparity was present only among families at mild/moderate risk for child maltreatment. Future research should identify ways to reduce this disparity and involve the Asian and NHOPI communities in prevention and treatment program design and implementation.
High Temperature Gas-Cooled Test Reactor Point Design: Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-01-01
A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open-loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
Runtime verification of embedded real-time systems.
Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg
We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus, facilitate applications of the framework in both a prototyping and a post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
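A naive reference semantics for the time-bounded past operator can be written down directly; this sketch (with an assumed Boolean trace) only pins down what the paper's far more efficient hardware observers compute. Here "phi Since_[a,b] psi" holds at step n iff psi held at some step m with n-b <= m <= n-a and phi held at every step after m up to n:

```python
# Naive O(n * b) evaluation of the time-bounded Since operator of ptMTL
# over a finite discrete-time trace; the observer algorithms in the paper
# achieve the same results with far lower time and space complexity.

def since(phi, psi, a, b):
    """Return the truth value of (phi Since_[a,b] psi) at every step."""
    out = []
    for n in range(len(phi)):
        holds = False
        for m in range(max(0, n - b), n - a + 1):
            if psi[m] and all(phi[k] for k in range(m + 1, n + 1)):
                holds = True
                break
        out.append(holds)
    return out

# Assumed example trace: psi fires once at step 1, phi fails at step 3.
phi = [True, True, True, False, True]
psi = [False, True, False, False, False]
res = since(phi, psi, a=0, b=2)
# holds at steps 1 and 2 (within the 2-step bound, phi unbroken),
# fails at step 3 (phi broken) and step 4 (psi out of the window)
```

Executable reference semantics of this kind are the usual yardstick against which optimized observers are proved correct.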
NASA Astrophysics Data System (ADS)
Ackerman, T. R.; Pizzuto, J. E.
2016-12-01
Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank) representing the youngest 50% of eroded sediment. Then (until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment), the distributions are well fit by heavy tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. 
Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
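The empirical survivor function and its log-log slope, as used above to identify the power-law reach of the storage-time distributions, can be sketched as follows; the synthetic Pareto sample (tail exponent 1, matching the fitted point-bar slope of -1) is an assumption for illustration, not model output:

```python
# Empirical survivor function S(t) = P(storage time > t) and a
# least-squares slope of log S vs log t over a chosen age window.
import math, random

def survivor(samples, t):
    return sum(s > t for s in samples) / len(samples)

def loglog_slope(samples, t_lo, t_hi, n_pts=20):
    xs, ys = [], []
    for i in range(n_pts):
        t = t_lo * (t_hi / t_lo) ** (i / (n_pts - 1))  # log-spaced ages
        s = survivor(samples, t)
        if s > 0:
            xs.append(math.log(t))
            ys.append(math.log(s))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
# Synthetic storage times with S(t) ~ t^-1 beyond 500 years.
ages = [500.0 * random.paretovariate(1.0) for _ in range(20000)]
slope = loglog_slope(ages, t_lo=1000.0, t_hi=10000.0)  # expect roughly -1
```

Applying the same slope estimate to modeled point-bar and overbank storage times over the 500 y to 12 ky window is one way to recover the reported exponents of -1 and -0.75.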
Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3
1990-06-01
power density and efficiency of an operating electrochemical system. The method is general; the examples to illustrate the selected points pertain to ... System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry ... (i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this
Evolution of the randomized controlled trial in oncology over three decades.
Booth, Christopher M; Cescon, David W; Wang, Lisa; Tannock, Ian F; Krzyzanowska, Monika K
2008-11-20
The randomized controlled trial (RCT) is the gold standard for establishing new therapies in clinical oncology. Here we document changes with time in design, sponsorship, and outcomes of oncology RCTs. Reports of RCTs evaluating systemic therapy for breast, colorectal (CRC), and non-small-cell lung cancer (NSCLC) published 1975 to 2004 in six major journals were reviewed. Two authors abstracted data regarding trial design, results, and conclusions. Conclusions of authors were graded using a 7-point Likert scale. For each study the effect size for the primary end point was converted to a summary measure. A total of 321 eligible RCTs were included (48% breast, 24% CRC, 28% NSCLC). Over time, the number and size of RCTs increased considerably. For-profit/mixed sponsorship increased substantially during the study period (4% to 57%; P < .001). There was increasing use of time-to-event measures (39% to 78%) and decreasing use of response rate (54% to 14%) as primary end point (P < .001). Effect size remained stable over the study period. Authors have become more likely to strongly endorse the experimental arm (P = .017). A significant P value for the primary end point and industry sponsorship were each independently associated with endorsement of the experimental agent (odds ratio [OR] = 19.6, 95% CI, 8.9 to 43.1, and OR = 3.5, 95% CI, 1.6 to 7.5, respectively). RCTs in oncology have become larger and are more likely to be sponsored by industry. Authors of modern RCTs are more likely to strongly endorse novel therapies. For-profit sponsorship and statistically significant results are independently associated with endorsement of the experimental arm.
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2005-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
Design of diversity and focused combinatorial libraries in drug discovery.
Young, S Stanley; Ge, Nanxiang
2004-05-01
Using well-characterized chemical reactions and readily available monomers, chemists are able to create sets of compounds, termed libraries, which are useful in drug discovery processes. The design of combinatorial chemical libraries can be complex and there has been much information recently published offering suggestions on how the design process can be carried out. This review focuses on literature with the goal of organizing current thinking. At this point in time, it is clear that benchmarking of current suggested methods is required as opposed to further new methods.
Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long
2018-01-01
With the development of satellite payload technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
A passive pendulum wobble damper for a low spin rate Jupiter flyby spacecraft
NASA Technical Reports Server (NTRS)
Fowler, R. C.
1972-01-01
When the spacecraft has a low spin rate and precise pointing requirements, the wobble angle must be damped in a time period equivalent to a very few wobble cycles. The design, analysis, and test of a passive pendulum wobble damper are described.
Evaluation of crumb rubber in hot mix asphalt.
DOT National Transportation Integrated Search
2004-07-01
An asphalt-rubber hot mix asphalt (AR-HMA) design was created using a Superpave 12.5 mm gradation and a #30 (-) mesh crumb rubber at 20% total weight of the asphalt binder. At this point in time, asphalt rubber has only been used with HMA that con...
Executive Cognitive Function and Food Intake in Children
ERIC Educational Resources Information Center
Riggs, Nathaniel R.; Spruijt-Metz, Donna; Sakuma, Kari-Lyn; Chou, Chih-Ping; Pentz, Mary Ann
2010-01-01
Objective: The current study investigated relations among neurocognitive skills important for behavioral regulation, and the intake of fruit, vegetables, and snack food in children. Design: Participants completed surveys at a single time point. Setting: Assessments took place during school. Participants: Participants were 107 fourth-grade children…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Shumin; Tian Hongwei; Pei Yanhui
A novel hedgehog-like core/shell structure, consisting of a high density of vertically aligned graphene sheets and a thin graphene shell/a copper core (VGs-GS/CC), has been synthesized via a simple one-step synthesis route using radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD). Scanning and transmission electron microscopy investigations show that the morphology of this core/shell material could be controlled by deposition time. For a short deposition time, only a multilayer graphene shell tightly surrounds the copper particle, while when the deposition time is relatively long, graphene sheets extend from the surface of the GS/CC. The GS can protect CC particles from oxidation. The growth mechanism for the obtained GS/CC and VGs-GS/CC has been revealed. Compared to VGs, the VGs-GS/CC material exhibits a better electron field emission property. This investigation opens a possibility for designing core/shell structures of different carbon-metal hybrid materials for a wide variety of practical applications. Graphical abstract: With increasing deposition time, graphene sheets extend from the surface of the GS/CC, causing the multilayer graphene-encapsulated copper to be converted into a vertically aligned graphene sheets-graphene shell/copper core structure. Highlights: A novel hedgehog-like core/shell structure has been synthesized. The structure consists of vertical graphene sheets-graphene shell and copper core. The morphology of VGs-GS/CC can be controlled by choosing a proper deposition time. With increasing deposition time, graphene sheets extend from the surface of GS/CC. VGs-GS/CC exhibits a better electron field emission property as compared with VGs.
Gurtner, Sebastian
2013-04-01
Research and practical guidelines have many implications for how to structure a health economic study. A major focus in recent decades has been the quality of health economic research. In practice, the factors influencing a study design are not limited to the quest for quality. Moreover, the framework of the study is important. This research addresses three major questions related to these framework aspects. First, we want to know whether the design of health economic studies has changed over time. Second, we want to know how the subject of a study, whether it is a process or product innovation, influences the parameters of the study design. Third, one of the most important questions we will answer is whether and how the study's source of funding has an impact on the design of the research. To answer these questions, a total of 234 health economic studies were analyzed using a correspondence analysis and a logistic regression analysis. All three categories of framework factors have an influence on the aspects of the study design. Health economic studies have evolved over time, leading to the use of more advanced methods like complex sensitivity analyses. Additionally, the patient's point of view has increased in importance. The evaluation of product innovations has focused more on utility concepts. On the other hand, the source of funding may influence only a few aspects of the study design, such as the use of evaluation methods, the source of data, and the use of certain utility measures. The most important trends in health care, such as the emphasis on the patients' point of view, become increasingly established in health economic evaluations with the passage of time. Although methodological challenges remain, modern information and communication technologies provide a basis for increasing the complexity and quality of health economic studies if used frequently.
Space Shuttle Proximity Operation Sensor Study
NASA Technical Reports Server (NTRS)
Weber, C. L.; Alem, W. K.
1978-01-01
The performance of the Ku-band radar was analyzed in detail, updated, and summarized. In doing so, two different radar design philosophies were described, and the corresponding differences in losses were enumerated. The resulting design margins were determined for both design philosophies and for both the designated and nondesignated range modes of operation. In some cases the design margin was about zero, and in other cases it was significantly less than zero. With the point of view described above, the recommended solution is to allow more scan time but at the present scan rate. With no other changes in the present configuration, the radar met design detection specifications for all design philosophies at a range of 11.3 nautical miles.
Airspace Designations and Reporting Points (1997)
DOT National Transportation Integrated Search
1997-09-10
This order, published yearly, provides a listing of all airspace designations and reporting points, and pending amendments to those designations and reporting points, established by the Federal Aviation Administration (FAA) under the authority ...
Cross-modal decoupling in temporal attention.
Mühlberg, Stefanie; Oriolo, Giovanni; Soto-Faraco, Salvador
2014-06-01
Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Development of a Standard Platinum Resistance Thermometer for Use up to the Copper Point
NASA Astrophysics Data System (ADS)
Tavener, J. P.
2015-08-01
The international temperature scale of 1990 defines temperatures in the range from 13.8 K to 1234.93 K using a standard platinum resistance thermometer (SPRT) as an interpolating instrument. Near the upper end of this range, current SPRT designs require extreme care to avoid contamination, especially by metallic impurities, which can cause rapid and irreversible drift. This study investigates the performance of a new design of high-temperature SPRT with the aim of improving the stability of the SPRTs and extending their temperature range. The prototype SPRTs have an alumina sheath and a sapphire support for the sensing element, and are aspirated with dry air and operated with a dc bias voltage to suppress the diffusion of metal-ion contaminants. Three prototype thermometers were exposed to temperatures near or above the copper freezing point for total exposure times in excess of 500 h and exhibited drifts in the triple-point resistance of less than 10 mK. The new design eliminates some of the problems associated with fused-silica sheaths and sensor-support structures and is a viable option for a high-accuracy thermometer at temperatures approaching the copper point.
In-flight results of adaptive attitude control law for a microsatellite
NASA Astrophysics Data System (ADS)
Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.
2015-06-01
Because satellites usually do not experience large changes of mass, center of gravity or inertia in orbit, linear time invariant (LTI) controllers have been widely used to control their attitude. But, as the pointing requirements become more stringent and the satellite's structure more complex with large steerable and/or deployable appendices and flexible modes occurring in the control bandwidth, one unique LTI controller is no longer sufficient. One solution consists in designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which could present at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing the structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly on the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation and performances, CNES selected it as end-of-life experiment on PICARD microsatellite. This paper describes the design, validation and in-flight results of the new adaptive attitude control law, compared to nominal control law.
Asteroid Deflection Mission Design Considering On-Ground Risks
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter
The deflection of an Earth-threatening asteroid requires high transparency of the mission design process. The goal of such a mission is to move the projected point of impact over the face of Earth until the asteroid is on a miss trajectory. During the course of deflection operations, the projected point of impact will cross regions that were unaffected before alteration of the asteroid's trajectory. These regions are at risk of sustaining considerable damage if the deflecting spacecraft becomes non-operational: the projected impact point would remain where the deflection mission put it at the time of mission failure. Hence, all regions that are potentially affected by the deflection campaign need to be informed about this risk and should be involved in the mission design process. A mission design compromise will have to be found that is acceptable to all affected parties (Schweickart, 2004). A software tool that assesses the on-ground risk due to deflection missions is under development. It will allow the accumulated on-ground risk along the path of the projected impact point to be studied. The tool will help determine a deflection mission design that minimizes the on-ground casualty and damage risk due to deflection operations. Currently, the tool is capable of simulating asteroid trajectories through the solar system and considers gravitational forces between solar system bodies. A virtual asteroid may be placed at an arbitrary point in the simulation for analysis and manipulation. Furthermore, the tool determines the asteroid's point of impact and provides an estimate of the population at risk. Validation has been conducted against the solar system ephemeris catalogue HORIZONS by NASA's Jet Propulsion Laboratory (JPL). Asteroids that are propagated over a period of 15 years show typical position discrepancies of 0.05 Earth radii relative to HORIZONS' output.
Ultimately, results from this research will aid in the identification of requirements for deflection missions that enable effective, minimum-risk asteroid deflection. Schweickart, R. L. (2004). The real deflection dilemma. In 2004 Planetary Defense Conference: Protecting Earth from Asteroids (pp. 1-6). Orange County, California. Retrieved from http://b612foundation.org/wp-content/uploads/2013/02/Real_Deflection_Dilemma.pdf
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but in many cases their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates the generating points of the Voronoi diagram to appropriate positions. We then construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to a practical case study: Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.
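The core idea of a Laguerre (power) diagram with area targets can be sketched in a few lines. This is not the paper's linear-programming adjustment; it uses a simple gradient-style weight update (grow undersized cells, shrink oversized ones) on a discretized unit square, and the sites, area targets, and step sizes are all invented for illustration:

```python
import numpy as np

def power_assign(samples, sites, weights):
    """Label each sample with the Laguerre (power) cell minimizing
    the power distance |x - p_i|^2 - w_i."""
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2 - weights[None, :], axis=1)

def fit_weights(samples, sites, targets, iters=600, lr=0.3):
    """Gradient-style iteration on cell weights: enlarge cells whose
    area fraction is below target, shrink those above."""
    w = np.zeros(len(sites))
    for _ in range(iters):
        labels = power_assign(samples, sites, w)
        frac = np.bincount(labels, minlength=len(sites)) / len(samples)
        w += lr * (targets - frac)
    return w

# discretize the unit square and fit three cells to 50/30/20 area targets
g = np.linspace(0.0, 1.0, 80)
samples = np.array([(x, y) for x in g for y in g])
sites = np.array([[0.2, 0.2], [0.7, 0.3], [0.4, 0.8]])
targets = np.array([0.5, 0.3, 0.2])
w = fit_weights(samples, sites, targets)
frac = np.bincount(power_assign(samples, sites, w), minlength=3) / len(samples)
```

The update is gradient ascent on the concave dual of the capacity-constrained power diagram problem, so the cell fractions approach the targets up to the granularity of the sample grid.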
A sled push stimulus potentiates subsequent 20-m sprint performance.
Seitz, Laurent B; Mina, Minas A; Haff, G Gregory
2017-08-01
The objective of this study was to examine the potentiating effects of performing a single sprint-style sled push on subsequent unresisted 20-m sprint performance. Randomized crossover design. Following a familiarization session, twenty rugby league players performed maximal unresisted 20-m sprints before and 15 s, 4, 8, and 12 min after a single sled push stimulus loaded with either 75 or 125% body mass. The two sled push conditions were performed in a randomized order over a one-week period. The fastest sprint time recorded before each sled push was compared to that recorded at each time point after to determine the post-activation potentiation (PAP) effect. After the 75% body mass sled push, sprint time was 0.26±1.03% slower at the 15 s time point (effect size [ES]=0.07) but faster at the 4 min (-0.95±2.00%; ES=-0.22), 8 min (-1.80±1.43%; ES=-0.42) and 12 min (-1.54±1.54%; ES=-0.36) time points. Sprint time was slower at all time points after the 125% body mass sled push (1.36±2.36% to 2.59±2.90%; ESs=0.34-0.64). Twenty-meter sprint performance is potentiated 4-12 min following a sled push loaded with 75% body mass, while it is impaired after a 125% body mass sled push. These results are of great importance for coaches seeking to potentiate sprint performance with the sled push exercise. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
A field trial of ethyl hexanediol against Aedes dorsalis in Sonoma County, California.
Rutledge, L C; Hooper, R L; Wirtz, R A; Gupta, R K
1989-09-01
The repellent ethyl hexanediol (2-ethyl-1,3-hexanediol) was tested against the mosquito Aedes dorsalis in a coastal salt marsh in California. The experimental design incorporated a linear regression model, sequential treatments and a proportional end point (95%) for protection time. The protection time of 0.10 mg/cm2 ethyl hexanediol was estimated at 0.8 h. This time is shorter than that obtained previously for deet (N,N-diethyl-3-methylbenzamide) against Ae. dorsalis (4.4 h).
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integration with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
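A minimal illustration of the point implicit idea (a toy sketch, not the authors' scheme) is 1-D advection with a stiff local relaxation term: the advective coupling between cells is explicit, while the stiff term, which involves only the local unknown, is implicit and solvable in closed form at each point, so no global solve or iteration is needed. The grid size and coefficients below are invented:

```python
import numpy as np

def point_implicit_step(u, dt, dx, a, k, u_eq):
    # explicit part: first-order upwind advection (couples neighboring cells)
    adv = -a * (u - np.roll(u, 1)) / dx
    # implicit part: stiff local relaxation -k*(u - u_eq); since it involves
    # only the local unknown, each cell is updated in closed form
    return (u + dt * (adv + k * u_eq)) / (1.0 + dt * k)

n, a, k = 64, 1.0, 1.0e4            # k makes the relaxation term stiff
dx = 1.0 / n
dt = 0.5 * dx / a                   # limited by the advective CFL only; a fully
                                    # explicit scheme would need dt < 2/k = 2e-4
u_eq = 1.0
u = np.sin(2 * np.pi * np.arange(n) * dx)   # initial deviation from equilibrium
for _ in range(int(1.0 / dt)):
    u = point_implicit_step(u, dt, dx, a, k, u_eq)
```

Despite a time step roughly 40 times larger than the explicit stability limit of the stiff term, the update remains stable and relaxes the solution to equilibrium, mirroring the stability claim in the abstract.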
NASA Astrophysics Data System (ADS)
Zavodszky, Maria I.; Sanschagrin, Paul C.; Kuhn, Leslie A.; Korde, Rajesh S.
2002-12-01
For the successful identification and docking of new ligands to a protein target by virtual screening, the essential features of the protein and ligand surfaces must be captured and distilled in an efficient representation. Since the running time for docking increases exponentially with the number of points representing the protein and each ligand candidate, it is important to place these points where the best interactions can be made between the protein and the ligand. This definition of favorable points of interaction can also guide protein structure-based ligand design, which typically focuses on which chemical groups provide the most energetically favorable contacts. In this paper, we present an alternative method of protein template and ligand interaction point design that identifies the most favorable points for making hydrophobic and hydrogen-bond interactions by using a knowledge base. The knowledge-based protein and ligand representations have been incorporated in version 2.0 of SLIDE and resulted in dockings closer to the crystal structure orientations when screening a set of 57 known thrombin and glutathione S-transferase (GST) ligands against the apo structures of these proteins. There was also improved scoring enrichment of the dockings, meaning better differentiation between the chemically diverse known ligands and a ~15,000-molecule dataset of randomly-chosen small organic molecules. This approach for identifying the most important points of interaction between proteins and their ligands can equally well be used in other docking and design techniques. While much recent effort has focused on improving scoring functions for protein-ligand docking, our results indicate that improving the representation of the chemistry of proteins and their ligands is another avenue that can lead to significant improvements in the identification, docking, and scoring of ligands.
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are the general practice in the pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases of interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but more robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
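The gap between treating intervals as fixed points and using the interval-censored likelihood can be sketched with a toy exponential model. The simulated data, exponential survival assumption, and assessment spacing are all invented for illustration; real PFS analyses would typically use nonparametric (e.g., Turnbull-type) or more flexible methods:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_rate, n, visit = 0.5, 2000, 2.0     # hazard, subjects, assessment spacing

t = rng.exponential(1.0 / true_rate, n)  # latent (unobserved) event times
left = np.floor(t / visit) * visit       # last assessment before the event
right = left + visit                     # first assessment showing the event

def neg_loglik(rate):
    # interval-censored likelihood: P(L < T <= R) = S(L) - S(R)
    p = np.exp(-rate * left) - np.exp(-rate * right)
    return -np.log(p).sum()

rate_interval = minimize_scalar(neg_loglik, bounds=(1e-3, 5.0),
                                method="bounded").x

# conventional shortcut: treat the detection visit as the exact event time
rate_fixed_point = n / right.sum()
```

The fixed-point shortcut systematically shifts every event time late by up to one assessment interval, biasing the estimated hazard low; the bias grows as assessments become more infrequent, which is the design concern raised in the abstract.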
NASA Astrophysics Data System (ADS)
Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos
2017-06-01
Rough machining shapes a workpiece toward its final form. This process accounts for a large proportion of the total machining time because it removes the bulk of the material. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV (closed bounded volume) evaluation is one of the concepts used to detect areas admissible in the machining process. While previous research detected the CBV area using a pair of normal vectors, in this research we simplify the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation resulted in three steps for this method: 1. triangulation from CAD design models; 2. development of CC (cutter contact) points from the point cloud; 3. the slicing-line method, used to evaluate each point-cloud position (under CBV or outside CBV). The result of this evaluation method can be used as a tool for orientation setup at each CC point position in feasible areas of rough machining.
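A simplified version of the third step can be sketched as an occlusion test: shoot a line upward from each cloud point and count intersections with the triangulated surface; a point with at least one intersection lies beneath the surface, i.e., a candidate under-CBV region unreachable along that tool direction. The mesh, the test points, and the fixed +Z direction below are invented, and the paper's actual slicing-line evaluation is more involved:

```python
import numpy as np

def ray_hits_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: does orig + s*direc (s > eps)
    pass through the triangle (v0, v1, v2)?"""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direc, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                # ray parallel to triangle plane
        return False
    f = 1.0 / a
    s = orig - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(direc, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * np.dot(e2, q) > eps  # intersection lies ahead of the point

def under_surface(point, triangles, direc=np.array([0.0, 0.0, 1.0])):
    """Classify a cloud point: True if the upward line is blocked."""
    return any(ray_hits_triangle(point, direc, *tri) for tri in triangles)

# one overhanging triangle at z = 1 stands in for the triangulated CAD model
tri = (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
       np.array([0.0, 1.0, 1.0]))
blocked = under_surface(np.array([0.2, 0.2, 0.0]), [tri])   # beneath overhang
clear   = under_surface(np.array([0.8, 0.8, 0.0]), [tri])   # outside it
```

In a full implementation the test direction would vary with candidate tool orientations, which is what makes the classification useful for orientation setup at each CC point.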
NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION
Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...
32 CFR 525.4 - Entry authorization (policy).
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Captains of ships and/or marine vessels planning to enter Kwajalein Missile Range shall not knowingly... when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and departure of airspace covered by...
32 CFR 525.4 - Entry authorization (policy).
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Captains of ships and/or marine vessels planning to enter Kwajalein Missile Range shall not knowingly... when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and departure of airspace covered by...
32 CFR 525.4 - Entry authorization (policy).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Captains of ships and/or marine vessels planning to enter Kwajalein Missile Range shall not knowingly... when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and departure of airspace covered by...
32 CFR 525.4 - Entry authorization (policy).
Code of Federal Regulations, 2014 CFR
2014-07-01
... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...
32 CFR 525.4 - Entry authorization (policy).
Code of Federal Regulations, 2013 CFR
2013-07-01
... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...
47 CFR 25.271 - Control of transmitting stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... station. (b) The licensee of a transmitting earth station licensed under this part shall ensure that a trained operator is present on the earth station site, or at a designated remote control point for the earth station, at all times that transmissions are being conducted. No operator's license is required...
47 CFR 25.271 - Control of transmitting stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... station. (b) The licensee of a transmitting earth station licensed under this part shall ensure that a trained operator is present on the earth station site, or at a designated remote control point for the earth station, at all times that transmissions are being conducted. No operator's license is required...
47 CFR 25.271 - Control of transmitting stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... station. (b) The licensee of a transmitting earth station licensed under this part shall ensure that a trained operator is present on the earth station site, or at a designated remote control point for the earth station, at all times that transmissions are being conducted. No operator's license is required...
47 CFR 25.271 - Control of transmitting stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... station. (b) The licensee of a transmitting earth station licensed under this part shall ensure that a trained operator is present on the earth station site, or at a designated remote control point for the earth station, at all times that transmissions are being conducted. No operator's license is required...
47 CFR 25.271 - Control of transmitting stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... station. (b) The licensee of a transmitting earth station licensed under this part shall ensure that a trained operator is present on the earth station site, or at a designated remote control point for the earth station, at all times that transmissions are being conducted. No operator's license is required...
In studies on the formation of disinfection byproducts (DBPs), it is necessary to scavenge residual active (oxidizing) chlorine in order to fix the chlorination byproducts (such as haloethanoates) at a point in time. Thus, methods designed for compliance monitoring are not alway...
Designing a Robot Teaching Assistant for Enhancing and Sustaining Learning Motivation
ERIC Educational Resources Information Center
Hung, I-Chun; Chao, Kuo-Jen; Lee, Ling; Chen, Nian-Shing
2013-01-01
Although many researchers have pointed out that educational robots can stimulate learners' learning motivation, that motivation will hardly be sustained and will rapidly decrease over time if the amusement and interaction merely come from the new technology itself without incorporating instructional strategies. Many researchers have…
Triana Safehold: A New Gyroless, Sun-Pointing Attitude Controller
NASA Technical Reports Server (NTRS)
Chen, J.; Morgenstern, Wendy; Garrick, Joseph
2001-01-01
Triana is a single-string spacecraft to be placed in a halo orbit about the sun-earth L1 Lagrangian point. The Attitude Control Subsystem (ACS) hardware includes four reaction wheels, ten thrusters, six coarse sun sensors, a star tracker, and a three-axis Inertial Measurement Unit (IMU). The ACS Safehold design features a gyroless sun-pointing control scheme using only sun sensors and wheels. With this minimum-hardware approach, Safehold increases mission reliability in the event of a gyroscope anomaly. In place of gyroscope rate measurements, Triana Safehold uses wheel tachometers to help provide a scaled estimate of the spacecraft body rate about the sun vector. Since Triana nominally performs momentum management every three months, its accumulated system momentum can reach a significant fraction of the wheel capacity. It is therefore a requirement for Safehold to maintain a sun-pointing attitude even when the spacecraft system momentum is reasonably large. The tachometer sun-line rate estimate enables the controller to bring the spacecraft close to its desired sun-pointing attitude even with reasonably high system momentum and wheel drag. This paper presents the design rationale behind this gyroless controller, a stability analysis, and some time-domain simulation results showing performance with various initial conditions. Finally, suggestions for future improvements are briefly discussed.
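The tachometer-based rate estimate rests on momentum conservation: if the total system momentum H in the body frame is approximately known (e.g., from the last momentum management), the body rate follows from omega = I^-1 (H - A h_w), where h_w are the wheel momenta read from the tachometers and A holds the wheel spin axes. The inertia matrix, wheel geometry, and all numbers below are invented for illustration and are not Triana's:

```python
import numpy as np

I_sc = np.diag([120.0, 100.0, 90.0])   # spacecraft inertia, kg*m^2 (made up)
# spin axes of four reaction wheels as columns (pyramid-like layout, made up)
A = np.array([[1.0, 0.0, 0.0, 0.577],
              [0.0, 1.0, 0.0, 0.577],
              [0.0, 0.0, 1.0, 0.577]])

def body_rate_estimate(H_total, h_wheels):
    """omega = I^-1 (H_total - A h_wheels): body rate inferred from wheel
    tachometers, given the (assumed known) total momentum in body frame."""
    return np.linalg.solve(I_sc, H_total - A @ h_wheels)

# synthetic check: build wheel momenta consistent with a chosen true rate
omega_true = np.array([1.0e-3, -2.0e-3, 0.5e-3])   # rad/s
h_wheels = np.array([0.4, -0.2, 0.1, 0.3])         # N*m*s, tachometer readings
H_total = I_sc @ omega_true + A @ h_wheels         # total system momentum
omega_est = body_rate_estimate(H_total, h_wheels)

sun_hat = np.array([1.0, 0.0, 0.0])                # sun line in body frame
rate_about_sun = omega_est @ sun_hat               # the component Safehold needs
```

In flight the estimate is only "scaled" because H is not known exactly between momentum dumps, which is why the abstract stresses keeping sun pointing even with large accumulated momentum.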
NASA Astrophysics Data System (ADS)
Zeng, Hao; Zhang, Jingrui
2018-04-01
The low-thrust version of fuel-optimal transfers between periodic orbits with different energies in the vicinity of the five libration points is explored in depth in the Circular Restricted Three-Body Problem. An indirect optimization technique incorporating constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy that divides the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. Meanwhile, the solution procedure and the value ranges of the variables used are summarized. To demonstrate the effectiveness of the transfer scheme across different types of libration point orbits, transfer trajectories between sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits and butterfly orbits for the collinear and triangular libration points, are investigated with various times of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, depending on the locations and types of the mission orbits as well as the corresponding invariant manifold structures, and indicate that low-thrust transfers may be a beneficial option for extended science missions around different libration points.
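The dynamical setting can be reproduced with the standard CR3BP equations of motion in the rotating, nondimensional frame; conservation of the Jacobi constant gives a quick check on the integration. The mass parameter and the initial state near the triangular point L4 below are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # roughly the Earth-Moon mass parameter (illustrative)

def crtbp(t, s):
    """CR3BP equations of motion in the rotating, nondimensional frame."""
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + MU) ** 2 + y ** 2 + z ** 2)      # to the larger primary
    r2 = np.sqrt((x - 1 + MU) ** 2 + y ** 2 + z ** 2)  # to the smaller primary
    ax = x + 2 * vy - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
    ay = y - 2 * vx - (1 - MU) * y / r1**3 - MU * y / r2**3
    az = -(1 - MU) * z / r1**3 - MU * z / r2**3
    return [vx, vy, vz, ax, ay, az]

def jacobi(s):
    """Jacobi constant, the CR3BP's conserved energy-like quantity."""
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + MU) ** 2 + y ** 2 + z ** 2)
    r2 = np.sqrt((x - 1 + MU) ** 2 + y ** 2 + z ** 2)
    return x**2 + y**2 + 2 * (1 - MU) / r1 + 2 * MU / r2 - (vx**2 + vy**2 + vz**2)

# start slightly displaced from the (stable) triangular point L4 and propagate
s0 = [0.5 - MU + 0.01, np.sqrt(3) / 2, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(crtbp, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
drift = abs(jacobi(sol.y[:, -1]) - jacobi(np.array(s0)))
```

Adding a thrust acceleration to (ax, ay, az) and adjoining costate equations is what turns this propagator into the indirect optimization setup the abstract describes.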
Urinary Tract Infection Antibiotic Trial Study Design: A Systematic Review.
Basmaci, Romain; Vazouras, Konstantinos; Bielicki, Julia; Folgori, Laura; Hsia, Yingfen; Zaoutis, Theoklis; Sharland, Mike
2017-12-01
Urinary tract infections (UTIs) represent common bacterial infections in children. No guidance on the conduct of pediatric febrile UTI clinical trials (CTs) exists. To assess the criteria used for patient selection and the efficacy end points in febrile pediatric UTI CTs, the Medline, Embase, and Cochrane central databases and clinicaltrials.gov were searched between January 1, 1990, and November 24, 2016. We combined Medical Subject Headings terms and free-text terms for "urinary tract infections" and "therapeutics" and "clinical trials" in children (0-18 years), identifying 3086 articles. Two independent reviewers assessed study quality and performed data extraction. We included 40 CTs in which a total of 4381 cases of pediatric UTIs were investigated. Positive urine culture results and fever were the most common inclusion criteria (93% and 78%, respectively). Urine sampling method, pyuria, and colony thresholds were highly variable. Clinical and microbiological end points were assessed in 88% and 93% of the studies, respectively. Timing of end point assessment was highly variable, and only 3 (17%) of the 18 studies performed after the Food and Drug Administration's 1998 guidance publication assessed primary and secondary end points consistently with this guidance. Our limitations included a mixed population of healthy children and children with an underlying condition. In 6 trials, researchers studied a subgroup of patients with afebrile UTI. We observed wide variability in the microbiological inclusion criteria and the timing of end point assessment. The available guidance for adults appears not to be used by pediatricians and does not seem applicable to childhood UTI. A harmonized design for pediatric UTI CTs is necessary. Copyright © 2017 by the American Academy of Pediatrics.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
A high speed buffer for LV data acquisition
NASA Technical Reports Server (NTRS)
Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.
1987-01-01
The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.
A conceptual ground-water-quality monitoring network for San Fernando Valley, California
Setmire, J.G.
1985-01-01
A conceptual groundwater-quality monitoring network was developed for San Fernando Valley to provide the California State Water Resources Control Board with an integrated, basinwide control system to monitor the quality of groundwater. The geology, occurrence and movement of groundwater, land use, background water quality, and potential sources of pollution were described and then considered in designing the conceptual monitoring network. The network was designed to monitor major known and potential point and nonpoint sources of groundwater contamination over time. The network is composed of 291 sites where wells are needed to define the groundwater quality. The ideal network includes four specific-purpose networks to monitor (1) ambient water quality, (2) nonpoint sources of pollution, (3) point sources of pollution, and (4) line sources of pollution. (USGS)
System perspectives for mobile platform design in m-Health
NASA Astrophysics Data System (ADS)
Roveda, Janet M.; Fink, Wolfgang
2016-05-01
Advances in integrated circuit technologies have led to the integration of medical sensor front ends with data processing circuits, i.e., mobile platform design for wearable sensors. We discuss design methodologies for wearable sensor nodes and their applications in m-Health. From the user perspective, flexibility, comfort, appearance, fashion, ease-of-use, and visibility are key form factors. From the technology development point of view, high accuracy, low power consumption, and high signal to noise ratio are desirable features. From the embedded software design standpoint, real time data analysis algorithms, application and database interfaces are the critical components to create successful wearable sensor-based products.
The digital implementation of control compensators: The coefficient wordlength issue
NASA Technical Reports Server (NTRS)
Moroney, P.; Willsky, A. S.; Houpt, P. K.
1979-01-01
A number of mathematical procedures exist for designing discrete-time compensators. However, the digital implementation of these designs, with a microprocessor for example, has not received nearly as thorough an investigation. The finite-precision nature of the digital hardware makes it necessary to choose an algorithm (computational structure) that will perform 'well enough' with regard to the initial objectives of the design. This paper describes a procedure for estimating the required fixed-point coefficient wordlength for any given computational structure for the implementation of a single-input single-output LQG design. The results are compared to the actual number of bits necessary to achieve a specified performance index.
Exploration Spacecraft and Space Suit Internal Atmosphere Pressure and Composition
NASA Technical Reports Server (NTRS)
Lange, Kevin; Duffield, Bruce; Jeng, Frank; Campbell, Paul
2005-01-01
The design of habitat atmospheres for future space missions is heavily driven by physiological and safety requirements. Lower EVA prebreathe time and reduced risk of decompression sickness must be balanced against the increased risk of fire and higher cost and mass of materials associated with higher oxygen concentrations. Any proposed increase in space suit pressure must consider impacts on space suit mass and mobility. Future spacecraft designs will likely incorporate more composite and polymeric materials both to reduce structural mass and to optimize crew radiation protection. Narrowed atmosphere design spaces have been identified that can be used as starting points for more detailed design studies and risk assessments.
Application of two procedures for dual-point design of transonic airfoils
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Campbell, Richard L.; Allison, Dennis O.
1994-01-01
Two dual-point design procedures were developed to reduce the objective function of a baseline airfoil at two design points. The first procedure to develop a redesigned airfoil used a weighted average of the shapes of two intermediate airfoils redesigned at each of the two design points. The second procedure used a weighted average of two pressure distributions obtained from an intermediate airfoil redesigned at each of the two design points. Each procedure was used to design a new airfoil with reduced wave drag at the cruise condition without increasing the wave drag or pitching moment at the climb condition. Two cycles of the airfoil shape-averaging procedure successfully designed a new airfoil that reduced the objective function and satisfied the constraints. One cycle of the target (desired) pressure-averaging procedure was used to design two new airfoils that reduced the objective function and came close to satisfying the constraints.
On the effective point of measurement in megavoltage photon beams.
Kawrakow, Iwan
2006-06-01
This paper presents a numerical investigation of the effective point of measurement of thimble ionization chambers in megavoltage photon beams using Monte Carlo simulations with the EGSNRC system. It is shown that the effective point of measurement for relative photon beam dosimetry depends on every detail of the chamber design, including the cavity length, the mass density of the wall material, and the size of the central electrode, in addition to the cavity radius. Moreover, the effective point of measurement also depends on the beam quality and the field size. The paper therefore argues that the upstream shift of 0.6 times the cavity radius, recommended in current dosimetry protocols, is inadequate for accurate relative photon beam dosimetry, particularly in the build-up region. On the other hand, once the effective point of measurement is selected appropriately, measured depth-ionization curves can be equated to measured depth-dose curves for all depths within +/- 0.5%.
Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS
NASA Astrophysics Data System (ADS)
Decker, Glenn; Rosenbaum, Gerd; Singh, Om
2006-11-01
Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.
Designing Interactive Multimedia Instruction to Address Soldiers’ Learning Needs
2014-12-01
A point of need design seeks to identify and meet specific learning needs. It does so by focusing on the learning needs of an identified group ...instructional design and tailored training techniques to address the Army Learning Model (ALM) point of need concept. The point of need concept focuses both on ...developing six IMI exemplars focused on point of need training, including three variations of needs-focused designs: familiarization, core, and tailored
48 CFR 52.247-44 - F.o.b. Designated Air Carrier's Terminal, Point of Importation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false F.o.b. Designated Air... CLAUSES Text of Provisions and Clauses 52.247-44 F.o.b. Designated Air Carrier's Terminal, Point of... the delivery term is f.o.b. designated air carrier's terminal, point of importation: F.o.b. Designated...
48 CFR 52.247-43 - F.o.b. Designated Air Carrier's Terminal, Point of Exportation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false F.o.b. Designated Air... CLAUSES Text of Provisions and Clauses 52.247-43 F.o.b. Designated Air Carrier's Terminal, Point of... the delivery term is f.o.b. designated air carrier's terminal, point of exportation: F.o.b. Designated...
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to quantify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm X 6.4 cm X 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments
Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia
2016-01-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually.
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.
Hydraulic fracturing stress measurement in underground salt rock mines at Upper Kama Deposit
NASA Astrophysics Data System (ADS)
Rubtsova, EV; Skulkin, AA
2018-03-01
The paper reports the experimental results on hydraulic fracturing in-situ stress measurements in potash mines of Uralkali. The selected HF procedure, as well as the locations and designs of the measuring points, are substantiated. From the evidence of 78 HF stress measurement tests at eight measuring points, it has been found that the in-situ stress field is nonequicomponent, with the vertical stresses having values close to the estimates obtained from the overlying rock weight, while the horizontal stresses exceed the gravity stresses by 2–3 times.
Simulator study of a pictorial display for general aviation instrument flight
NASA Technical Reports Server (NTRS)
Adams, J. J.
1982-01-01
A simulation study of a computer-drawn pictorial display involved a flight task that included an en route segment, terminal area maneuvering, a final approach, a missed approach, and a hold. The pictorial display consists of the drawing of boxes which either move along the desired path or are fixed at designated way points. Two boxes may be shown at all times, one related to the active way point and the other related to the standby way point. Ground tracks and vertical profiles of the flights, time histories of the final approach, and comments were obtained from the pilots. The results demonstrate the accuracy and consistency with which the segments of the flight are executed. The pilots found that the display is easy to learn and to use, that it provides good situation awareness, and that it could improve the safety of flight. The small size of the display, the lack of numerical information on pitch, roll, and heading angles, and the lack of definition of the boundaries of the conventional glide slope and localizer areas were criticized.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to search the design space for optimum configurations more efficiently. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point-design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high-speed civil transport. Fundamental goals of the methodology, then, are to introduce higher-fidelity disciplinary analyses to the conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
NASA Astrophysics Data System (ADS)
Sardina, V.
2017-12-01
The Pacific Tsunami Warning Center's round-the-clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude, the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it is operationally advantageous to be able to assess how quickly the PTWC operational system can reasonably detect and locate an earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-wave detection times in at least a minimum number of stations, and maximum allowed azimuth gap, into a DetectionTimePoint class. Then we apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new TravelTimeLine objects and assigns the calculation of the corresponding P- and S-wave detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options.
Initial testing on an eight-core system shows that generation of a global 2D grid at 1 degree resolution, requiring detection at a minimum of 5 stations with no azimuth gap restriction, takes under 25 seconds. Under the same initial conditions, generation of a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. These preliminary results show a significant gain in grid generation speed when compared to other implementations via scripts or via previous versions of the C++ code that did not implement multithreading.
Time-varying Entry Heating Profile Replication with a Rotating Arc Jet Test Article
NASA Technical Reports Server (NTRS)
Grinstead, Jay Henderson; Venkatapathy, Ethiraj; Noyes, Eric A.; Mach, Jeffrey J.; Empey, Daniel M.; White, Todd R.
2014-01-01
A new approach for arc jet testing of thermal protection materials at conditions approximating the time-varying conditions of atmospheric entry was developed and demonstrated. The approach relies upon the spatial variation of heat flux and pressure over a cylindrical test model. By slowly rotating a cylindrical arc jet test model during exposure to an arc jet stream, each point on the test model will experience constantly changing applied heat flux. The predicted temporal profile of heat flux at a point on a vehicle can be replicated by rotating the cylinder at a prescribed speed and direction. An electromechanical test model mechanism was designed, built, and operated during an arc jet test to demonstrate the technique.
A novel dynamic sensing of wearable digital textile sensor with body motion analysis.
Yang, Chang-Ming; Lin, Zhan-Sheng; Hu, Chang-Lin; Chen, Yu-Shih; Ke, Ling-Yi; Chen, Yin-Rui
2010-01-01
This work proposes an innovative textile sensor system to monitor dynamic body movement and human posture by attaching wearable digital sensors to analyze body motion. The proposed system can display and analyze signals when individuals are walking, running, veering around, walking up and down stairs, or falling down, using a wearable monitoring system that reacts to the coordination between the body and feet. Several digital sensor designs are embedded in clothing and wear apparel. Any pressure point can determine which activity is underway. Importantly, the wearable digital sensors and monitoring system allow adaptive, real-time measurement of posture, velocity, and acceleration, with non-invasive transmission, supporting healthcare and point-of-care (POC) use in home and non-clinical environments.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Constructal vascularized structures
NASA Astrophysics Data System (ADS)
Cetkin, Erdal
2015-06-01
Smart features such as self-healing and self-cooling require bathing the entire volume with a coolant and/or healing agent. Bathing the entire volume is an example of point to area (or volume) flows. Point to area flows cover all the distributing and collecting kinds of flows, i.e. inhaling and exhaling, mining, river deltas, energy distribution, distribution of products on the landscape and so on. The flow resistances of a point to area flow can be decreased by changing the design with the guidance of the constructal law, which is the law of the design evolution in time. In this paper, how the flow resistances (heat, fluid and stress) can be decreased by using the constructal law is shown with examples. First, the validity of two assumptions is surveyed: using the temperature-independent Hess-Murray rule and using constant-diameter ducts where the duct discharges fluid along its edge. Then, point to area types of flows are explained by illustrating the results of two examples: fluid networks and heating an area. Last, how the structures should be vascularized for cooling and mechanical strength is documented. This paper shows that flow resistances can be decreased by morphing the shape freely without any restrictions or genetic algorithms.
Development of a Multi-Point Microwave Interferometry (MPMI) Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, Paul Elliott; Cooper, Marcia A.; Jilek, Brook Anton
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
An Evaluation of Spacecraft Pointing Requirements for Optically Linked Satellite Systems
NASA Astrophysics Data System (ADS)
Gunter, B. C.; Dahl, T.
2017-12-01
Free space optical (laser) communications can offer certain advantages for many remote sensing applications, due primarily to the high data rates (Gb/s) and energy efficiencies possible from such systems. An orbiting network of crosslinked satellites could potentially relay imagery and other high-volume data at near real-time intervals. To achieve this would require satellites actively tracking one or more satellites, as well as ground terminals. The narrow laser beam width utilized by the transmitting satellites poses technical challenges due to the higher pointing accuracy required for effective signal transmission, in particular if small satellites are involved. To better understand what it would take to realize such a small-satellite laser communication network, this study investigates the pointing requirements needed to support optical data links. A general method for characterizing pointing tolerance, angle rates, and accelerations for line-of-sight vectors is devised and applied to various case studies. Comparisons with state-of-the-art small satellite attitude control systems are also made to assess what is possible using current technology. The results help refine the trade space for optically linked network designs, from the hardware aboard each satellite to the design of the satellite constellation itself.
Robust modular product family design
NASA Astrophysics Data System (ADS)
Jiang, Lan; Allada, Venkat
2001-10-01
This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How to effectively design a product family (PF) that is robust enough to accommodate future customer requirements. (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called the quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially, a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.
Shoemaker, Ritchie C; House, Dennis E
2005-01-01
The human health risk for chronic illnesses involving multiple body systems following inhalation exposure to the indoor environments of water-damaged buildings (WDBs) has remained poorly characterized and the subject of intense controversy. The current study assessed the hypothesis that exposure to the indoor environments of WDBs with visible microbial colonization was associated with illness. The study used a cross-sectional design with assessments at five time points, and the interventions of cholestyramine (CSM) therapy, exposure avoidance following therapy, and reexposure to the buildings after illness resolution. The methodological approach included oral administration of questionnaires, medical examinations, laboratory analyses, pulmonary function testing, and measurements of visual function. Of the 21 study volunteers, 19 completed assessment at each of the five time points. Data at Time Point 1 indicated multiple symptoms involving at least four organ systems in all study participants, a restrictive respiratory condition in four participants, and abnormally low visual contrast sensitivity (VCS) in 18 participants. Serum leptin levels were abnormally high and alpha melanocyte stimulating hormone (MSH) levels were abnormally low. Assessments at Time Point 2, following 2 weeks of CSM therapy, indicated a highly significant improvement in health status. Improvement was maintained at Time Point 3, which followed exposure avoidance without therapy. Reexposure to the WDBs resulted in illness reacquisition in all participants within 1 to 7 days. Following another round of CSM therapy, assessments at Time Point 5 indicated a highly significant improvement in health status. The group-mean number of symptoms decreased from 14.9+/-0.8 S.E.M. at Time Point 1 to 1.2+/-0.3 S.E.M., and the VCS deficit of approximately 50% at Time Point 1 was fully resolved. Leptin and MSH levels showed statistically significant improvement. 
The results indicated that CSM was an effective therapeutic agent, that VCS was a sensitive and specific indicator of neurologic function, and that illness involved systemic and hypothalamic processes. Although the results supported the general hypothesis that illness was associated with exposure to the WDBs, this conclusion was tempered by several study limitations. Exposure to specific agents was not demonstrated, study participants were not randomly selected, and double-blinding procedures were not used. Additional human and animal studies are needed to confirm this conclusion, investigate the role of complex mixtures of bacteria, fungi, mycotoxins, endotoxins, and antigens in illness causation, and characterize modes of action. Such data will improve the assessment of human health risk from chronic exposure to WDBs.
ERIC Educational Resources Information Center
Gu, Lin; Lockwood, John; Powers, Donald E.
2015-01-01
Standardized tests are often designed to provide only a snapshot of test takers' knowledge, skills, or abilities at a single point in time. Sometimes, however, they are expected to serve more demanding functions, one of which is assessing change in knowledge, skills, or ability over time because of learning effects. The latter is the case for the…
Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera
NASA Astrophysics Data System (ADS)
Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund
2016-03-01
We have designed and implemented a novel acoustic lens based focusing technology into a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser exposed absorbers within a small volume get focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware systems, is faster compared to electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 several-centimeter-size ex vivo human prostate, kidney, and thyroid specimens with a millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field, and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined on each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
Fuel-optimal, low-thrust transfers between libration point orbits
NASA Astrophysics Data System (ADS)
Stuart, Jeffrey R.
Mission design requires the efficient management of spacecraft fuel to reduce mission cost, increase payload mass, and extend mission life. High efficiency, low-thrust propulsion devices potentially offer significant propellant reductions. Periodic orbits that exist in a multi-body regime and low-thrust transfers between these orbits can be applied in many potential mission scenarios, including scientific observation and communications missions as well as cargo transport. In light of the recent discovery of water ice in lunar craters, libration point orbits that support human missions within the Earth-Moon region are of particular interest. This investigation considers orbit transfer trajectories generated by a variable specific impulse, low-thrust engine with a primer-vector-based, fuel-optimizing transfer strategy. A multiple shooting procedure with analytical gradients yields rapid solutions and serves as the basis for an investigation into the trade space between flight time and consumption of fuel mass. Path and performance constraints can be included at node points along any thrust arc. Integration of invariant manifolds into the design strategy may also yield improved performance and greater fuel savings. The resultant transfers offer insight into the performance of the variable specific impulse engine and suggest novel implementations of conventional impulsive thrusters. Transfers incorporating invariant manifolds demonstrate the fuel savings and expand the mission design capabilities that are gained by exploiting system symmetry. A number of design applications are generated.
Effect of area ratio on the performance of a 5.5:1 pressure ratio centrifugal impeller
NASA Technical Reports Server (NTRS)
Schumann, L. F.; Clark, D. A.; Wood, J. R.
1986-01-01
A centrifugal impeller which was initially designed for a pressure ratio of approximately 5.5 and a mass flow rate of 0.959 kg/sec was tested with a vaneless diffuser for a range of design point impeller area ratios from 2.322 to 2.945. The impeller area ratio was changed by successively cutting back the impeller exit axial width from an initial value of 7.57 mm to a final value of 5.97 mm. In all, four separate area ratios were tested. For each area ratio a series of impeller exit axial clearances was also tested. Test results are based on impeller exit surveys of total pressure, total temperature, and flow angle at a radius 1.115 times the impeller exit radius. Results of the tests at design speed, peak efficiency, and an exit tip clearance of 8 percent of exit blade height show that the impeller equivalent pressure recovery coefficient peaked at a design point area ratio of approximately 2.748 while the impeller aerodynamic efficiency peaked at a lower value of area ratio of approximately 2.55. The variation of impeller efficiency with clearance showed expected trends with a loss of approximately 0.4 points in impeller efficiency for each percent increase in exit axial tip clearance for all impellers tested.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
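The degrees-of-freedom point above can be made concrete: a saturated (minimal) design has exactly as many runs as model parameters, so det(X'X) is positive but no run is left over for a Lack of Fit test, while an added interior point both augments the information matrix and frees a degree of freedom. A minimal numpy sketch, where the {3,2} simplex-lattice points and the centroid are illustrative choices, not the paper's proposed designs:

```python
import numpy as np

def scheffe_quadratic(X):
    """Model matrix for Scheffé's quadratic mixture model:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept)."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Minimal (saturated) design: 3 vertices + 3 edge midpoints = 6 runs
# for 6 parameters -> zero degrees of freedom for lack of fit.
minimal = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
], float)

F = scheffe_quadratic(minimal)
d_crit = np.linalg.det(F.T @ F)          # D-criterion of the minimal design

# Augmenting with an interior point (the overall centroid) adds a run
# beyond the parameter count, and the matrix determinant lemma guarantees
# det(X'X) can only grow when a nonzero row is appended.
augmented = np.vstack([minimal, [1/3, 1/3, 1/3]])
Fa = scheffe_quadratic(augmented)
d_crit_aug = np.linalg.det(Fa.T @ Fa)

print(F.shape, d_crit > 0, d_crit_aug > d_crit)
```

The same comparison extends to any interior-point augmentation strategy: each added point trades a little boundary efficiency for the ability to check the fitted surface in the interior of the simplex.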
Basic design studies for a 600 MWe CFB boiler (270b, 2 x 600 C)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bursi, J.M.; Lafanechere, L.; Jestin, L.
Commercial CFB boilers are currently available in the 300 MWe equivalent range for use with international coal. Retrofitting of Provence 4 with a 250 MWe CFB boiler was an important step in CFB development. In light of the results obtained from two large French units--Emile Huchet 4 (125 MWe) and Provence 4 (250 MWe)--this paper focuses on the main technical points which are currently being studied in relation to the basic design of a 600 MWe CFB boiler, a project that has been undertaken by EDF. The general aim of this project is to demonstrate the competitiveness of a CFB boiler compared with a PF boiler. The main areas of focus in the design of this large CFB boiler with advanced steam conditions are described. These points are subjected to particular analysis from a design standpoint. The objective is to prepare the precise specifications needed to ensure a product which is optimized in terms of quality/cost or service/cost. Due to the present lack of theoretical understanding of the refined and complex two-phase flow, design is a challenge which has to be based on reliable and comprehensive data obtained from large plants in commercial operation. This will ensure that the advantages of CFB which arise from the hydrodynamics within the circulation loop are maintained. The major goals of maintaining good particle residence time and concentration in the furnace are described. Misunderstanding of CFB furnace bottom conditions is also pointed out, with cost reduction and better NO{sub x} capture certainly among the major new targets in relation to bottom furnace design. General problems associated with the heat exchanger arrangement, principally those linked to high steam conditions and, especially, the vaporization system, are discussed. Once again, comparison with PF in this area showed that CFB boilers appear more competitive. Finally, the main area in which there is a need for sharing of CFB experience among CFB users is pointed out.
Illumination system development using design and analysis of computer experiments
NASA Astrophysics Data System (ADS)
Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter
2015-09-01
Computer assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often result in an optimal solution that is influenced by the starting point by converging to a local minimum, especially when dealing with high dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach, based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space filling design. Ray-tracing simulations are then performed at the design points and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The obtained homogeneities at the design points are further used as input to a Gaussian Process (GP), which develops a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, which is more likely to provide optimal parameters. Next, the light positioning case study is further investigated by varying the radius of the arc, and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was 6 times faster than the standard simplex method for an equal uniformity of 97%. The obtained results were successfully validated experimentally using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources, with 10% relative error.
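The DACE workflow described above (space-filling design, expensive simulations at the design points, a GP surrogate, then a cheap global search on the surrogate) can be sketched in a few lines. The toy merit function and kernel settings below are assumptions for illustration, standing in for the ray-tracing simulations:

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean of a zero-mean GP conditioned on the design points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_query, x_train) @ np.linalg.solve(K, y_train)

def merit(x):
    """Stand-in for the expensive ray-tracing merit function (assumed);
    its true minimum is at x = 0.6."""
    return (x - 0.6) ** 2

# 1) space-filling design, 2) "expensive" evaluations at the design points
x_design = np.linspace(0.0, 1.0, 7)
y_design = merit(x_design)

# 3) global search on the GP surrogate instead of the simulator
x_grid = np.linspace(0.0, 1.0, 1001)
mu = gp_posterior_mean(x_design, y_design, x_grid)
x_best = x_grid[np.argmin(mu)]

print(round(x_best, 2))
```

The efficiency claim in the abstract comes from exactly this substitution: each surrogate evaluation is a small linear-algebra operation, so the dense global search costs a tiny fraction of one ray-tracing run.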
Ramesh, S; Seshasayanan, R
2016-01-01
In this study, a baseband MIMO-OFDM system with timing synchronization and channel estimation is designed and implemented using FPGA technology. The system is prototyped based on the IEEE 802.11a standard, with signals transmitted and received using a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. In addition, the LS algorithm is implemented and the estimation of a frequency-selective fading channel is demonstrated. For coarse timing estimation, the MNC scheme is examined and implemented. First, the entire system is modeled in MATLAB and a floating-point model is established. Then, the fixed-point model is created with the help of Simulink and Xilinx's System Generator for DSP. The system is then synthesized and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time while computing the BER of the fixed-point model. This work constitutes a first step toward further investigation of innovative channel estimation strategies for applications in fourth-generation (4G) mobile communication systems.
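For known pilot symbols, the LS channel estimate referenced above reduces to a per-subcarrier division of the received frequency-domain symbols by the transmitted ones. A minimal numpy sketch, where the 4-tap channel and the noise level are illustrative assumptions rather than the implemented design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                      # subcarriers (802.11a uses a 64-point FFT)

# QPSK pilot symbols known to both transmitter and receiver
pilots = (2 * rng.integers(0, 2, n_sc) - 1
          + 1j * (2 * rng.integers(0, 2, n_sc) - 1)) / np.sqrt(2)

# Frequency-selective channel: a short impulse response whose FFT
# gives the per-subcarrier frequency response
h = np.array([0.8, 0.4 + 0.2j, 0.2, 0.1j])
H = np.fft.fft(h, n_sc)

# Received pilots with a little additive noise
Y = H * pilots + 0.01 * (rng.standard_normal(n_sc)
                         + 1j * rng.standard_normal(n_sc))

# Least-squares estimate: divide out the known pilots, subcarrier by subcarrier
H_ls = Y / pilots

err = np.max(np.abs(H_ls - H))
print(err < 0.1)
```

This is the cheapest estimator available (one complex division per subcarrier), which is why it is a common starting point for FPGA implementations before more elaborate estimators are explored.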
Research of HCR Gearing Properties from Warm Scuffing Damage Point of View
NASA Astrophysics Data System (ADS)
Kuzmanović, Siniša; Rackov, Milan; Vereš, Miroslav; Krajčovič, Adam
2014-12-01
The design and dimensioning of HCR gearing, particularly gearing with internal engagement, is nowadays highly topical, especially in the design of hybrid car drives. This kind of gearing has many advantages in operation, but at the same time it is more complicated at the stage of its design and load capacity calculation. In this contribution, the authors present some results of temperature scuffing research on internal and external HCR gearing. Equations are given for the calculation of warm scuffing resistance of both external and internal HCR gearing, derived according to the integral temperature criterion.
Novel Techniques for Pulsed Field Gradient NMR Measurements
NASA Astrophysics Data System (ADS)
Brey, William Wallace
Pulsed field gradient (PFG) techniques now find application in multiple quantum filtering and diffusion experiments as well as in magnetic resonance imaging and spatially selective spectroscopy. Conventionally, the gradient fields are produced by azimuthal and longitudinal currents on the surfaces of one or two cylinders. Using a series of planar units consisting of azimuthal and radial current elements spaced along the longitudinal axis, we have designed gradient coils having linear regions that extend axially nearly to the ends of the coil and to more than 80% of the inner radius. These designs locate the current return paths on a concentric cylinder, so the coils are called Concentric Return Path (CRP) coils. Coils having extended linear regions can be made smaller for a given sample size. Among the advantages that can accrue from using smaller coils are improved gradient strength and switching time, reduced eddy currents in the absence of shielding, and improved use of bore space. We used an approximation technique to predict the remaining eddy currents and a time-domain model of coil performance to simulate the electrical performance of the CRP coil and several reduced volume coils of more conventional design. One of the conventional coils was designed based on the time-domain performance model. A single-point acquisition technique was developed to measure the remaining eddy currents of the reduced volume coils. Adaptive sampling increases the dynamic range of the measurement. Measuring only the center of the stimulated echo removes chemical shift and B_0 inhomogeneity effects. The technique was also used to design an inverse filter to remove the eddy current effects in a larger coil set. We added pulsed field gradient and imaging capability to a 7 T commercial spectrometer to perform neuroscience and embryology research and used it in preliminary studies of binary liquid mixtures separating near a critical point. 
These techniques and coil designs will find application in research areas ranging from functional imaging to NMR microscopy.
Chen, Wen-Jie; Xiao, Meng; Chan, C. T.
2016-01-01
Weyl points, as monopoles of Berry curvature in momentum space, have captured much attention recently in various branches of physics. Realizing topological materials that exhibit such nodal points is challenging and indeed, Weyl points have been found experimentally in transition metal arsenide and phosphide and gyroid photonic crystal whose structure is complex. If realizing even the simplest type of single Weyl nodes with a topological charge of 1 is difficult, then making a real crystal carrying higher topological charges may seem more challenging. Here we design, and fabricate using planar fabrication technology, a photonic crystal possessing single Weyl points (including type-II nodes) and multiple Weyl points with topological charges of 2 and 3. We characterize this photonic crystal and find nontrivial 2D bulk band gaps for a fixed kz and the associated surface modes. The robustness of these surface states against kz-preserving scattering is experimentally observed for the first time. PMID:27703140
Attitude Design for the LADEE Mission
NASA Technical Reports Server (NTRS)
Galal, Ken; Nickel, Craig; Sherman, Ryan
2015-01-01
The Lunar Atmosphere and Dust Environment Explorer (LADEE) satellite successfully completed its 148-day science investigation in a low-altitude, near-equatorial lunar orbit on April 18, 2014. The LADEE spacecraft was built, managed and operated by NASA's Ames Research Center (ARC). The Mission Operations Center (MOC) was located at Ames and was responsible for activity planning, command sequencing, trajectory and attitude design, orbit determination, and spacecraft operations. The Science Operations Center (SOC) was located at Goddard Space Flight Center and was responsible for science planning, data archiving and distribution. This paper details attitude design and operations support for the LADEE mission. LADEE's attitude design was shaped by a wide range of instrument pointing requirements that necessitated regular excursions from the baseline one revolution per orbit "Ram" attitude. Such attitude excursions were constrained by a number of flight rules levied to protect instruments from the Sun, avoid geometries that would result in simultaneous occlusion of LADEE's two star tracker heads, and maintain the spacecraft within its thermal and power operating limits. To satisfy LADEE's many attitude requirements and constraints, a set of rules and conventions was adopted to manage the complexity of this design challenge and facilitate the automation of ground software that generated pointing commands spanning multiple days of operations at a time. The resulting LADEE Flight Dynamics System (FDS) that was developed used Visual Basic scripts that generated instructions to AGI's Satellite Tool Kit (STK) in order to derive quaternion commands at regular intervals that satisfied LADEE's pointing requirements. These scripts relied heavily on the powerful "align and constrain" capability of STK's attitude module to construct LADEE's attitude profiles and the slews to get there. A description of the scripts and the attitude modeling they embodied is provided. 
One particular challenge analysts faced was in the design of LADEE maneuver attitudes. A flight rule requiring pre-maneuver verification of in-flight maneuver conditions by ground operators prior to burn execution resulted in the need to accommodate long periods in the maneuver attitude. This in turn complicated efforts to satisfy star tracker interference and communication constraints in lunar orbit. In response to this challenge, a graphical method was developed and used to survey candidate rotation angles about the thrust vector. This survey method is described and an example of its use on a particular LADEE maneuver is discussed. Finally, the software and methodology used to satisfy LADEE's attitude requirements are also discussed in the context of LADEE's overall activity planning effort. In particular, the way in which strategic schedules of instrument and engineering activities were translated into actual attitude profiles at the tactical level, then converted into precise quaternion commands to achieve those pointing goals is explained. In order to reduce the risk of time-consuming re-planning efforts, this process included the generation of long-term projections of constraint violation predictions for individual attitude profiles that could be used to establish keep-out time-frames for particular attitude profiles. The challenges experienced and overall efficacy of both the overall LADEE ground system and the attitude components of the Flight Dynamics System in meeting LADEE's varied pointing requirements are discussed.
2016-12-01
HSE BoK. Rule One: Allow cultural change over time. Using the Mastering Group and the Creating column (six points), the objective would be, "Design ...is designed to address these topics. This introduction is followed by a literature review to establish some background on the demands for information...relies on key components of large group dynamics. The first is playful creation or a "loose, playful atmosphere and fun at work" that makes wikis
Riley, Bettina H; Dearmon, Valorie; Mestas, Lisa; Buckner, Ellen B
2016-01-01
Improving health care quality is the responsibility of nurses at all levels of the organization. This article describes a study that examined frontline staff nurses' professional practice characteristics to advance leadership through the understanding of relationships among practice environment, quality improvement, and outcomes. The study design was a descriptive quantitative design at 2 time points. Findings support the use of research and quality processes to build leadership capacity required for positive resolution of interdisciplinary operational failures.
High-freezing-point fuel studies
NASA Technical Reports Server (NTRS)
Tolle, F. F.
1980-01-01
Considerable progress in developing the experimental and analytical techniques needed to design airplanes to accommodate fuels with less stringent low temperature specifications is reported. A computer technique for calculating fuel temperature profiles in full tanks was developed. The computer program is being extended to include the case of partially empty tanks. Ultimately, the completed package is to be incorporated into an aircraft fuel tank thermal analyser code to permit the designer to fly various thermal exposure patterns, study fuel temperatures versus time, and determine holdup.
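The kind of fuel-temperature-profile calculation described above can be sketched as a one-dimensional explicit finite-difference conduction model. The slab geometry, fuel properties, and boundary conditions below are illustrative assumptions, not the report's actual code:

```python
import numpy as np

# 1-D transient conduction in a fuel layer cooled from the tank-skin side.
alpha = 8e-8            # assumed thermal diffusivity of jet fuel, m^2/s
L, n = 0.1, 21          # slab depth (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha   # explicit scheme is stable for dt <= 0.5*dx^2/alpha

T = np.full(n, 20.0)    # initial fuel temperature, deg C
T_skin = -40.0          # cold tank-skin boundary, deg C

for _ in range(2000):
    T[0] = T_skin                     # fixed cold-wall boundary condition
    T[-1] = T[-2]                     # insulated inner boundary (zero gradient)
    # central-difference update of the interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(round(T[-1], 1))  # interior fuel temperature after the simulated soak
```

A flight-worthy analyzer of the kind the report describes would add convection within the tank, time-varying skin temperatures from the flight profile, and partially empty tanks; this sketch shows only the conduction core of such a temperature-profile calculation.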
Description of the Spacecraft Control Laboratory Experiment (SCOLE) facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.; Rallo, Rosemary A.
1987-01-01
A laboratory facility for the study of control laws for large flexible spacecraft is described. The facility fulfills the requirements of the Spacecraft Control Laboratory Experiment (SCOLE) design challenge for a laboratory experiment, which will allow slew maneuvers and pointing operations. The structural apparatus is described in detail sufficient for modelling purposes. The sensor and actuator types and characteristics are described so that identification and control algorithms may be designed. The control implementation computer and real-time subroutines are also described.
2017-01-23
…occupant work space, central 90% of the Soldier population, encumbrance, posture and position, verification and validation, computer aided design…factors engineers could benefit by working with vehicle designers to perform virtual assessments in CAD when there is not enough time and/or funding to
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burley, Casey L.; Guo, Yueping
2016-01-01
Aircraft system noise predictions have been performed for NASA modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions developed over a period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.
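The direct Monte Carlo procedure described above (perturb the component-level predictions, recompute the system total for each of 10,000 samples, and take percentiles for a 95% band) can be sketched as follows. The source levels and their uncertainties are invented for illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# Hypothetical component-level noise predictions (level in dB) and their
# assumed standard deviations; values are illustrative, not from the paper.
components = {"fan": (78.0, 1.5), "jet": (74.0, 1.0), "airframe": (70.0, 0.8)}

totals = np.empty(n_sim)
for i in range(n_sim):
    # Perturb each source, then combine the sources on an energy basis
    levels = np.array([rng.normal(mu, sigma)
                       for mu, sigma in components.values()])
    totals[i] = 10 * np.log10(np.sum(10 ** (levels / 10)))

lo, hi = np.percentile(totals, [2.5, 97.5])   # 95% confidence band
band = hi - lo
print(round(band, 1))
```

Because the sources combine on an energy basis, the loudest component dominates the band width, which is one reason tightening the uncertainty of a single dominant source can shrink the total band substantially, as the 12.2 to 9.6 EPNL dB reduction in the abstract suggests.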
Mohajeri Amiri, Morteza; Fazeli, Mohammad Reza; Amini, Mohsen; Hayati Roodbari, Nasim; Samadi, Nasrin
2017-01-01
Designing enriched probiotic supplements may have some advantages, including protection of the probiotic microorganism from oxidative destruction, improving enzyme activity of the gastrointestinal tract, and probably increasing the half-life of the micronutrient. In this study, Saccharomyces cerevisiae enriched with dl-α-tocopherol was produced as an accumulator and transporter of a lipid-soluble vitamin for the first time. By using one-variable-at-a-time screening studies, three independent variables were selected. Optimization of the level of dl-α-tocopherol entrapment in S. cerevisiae cells was performed by using a Box-Behnken design via Design-Expert software. A modified quadratic polynomial model appropriately fit the data. The convex shape of the three-dimensional plots reveals that we could calculate the optimal point of the response in the range of parameters. The optimum points of the independent parameters to maximize the response were a dl-α-tocopherol initial concentration of 7625.82 µg/mL, a sucrose concentration of 6.86% w/v, and a shaking speed of 137.70 rpm. Under these conditions, the maximum level of dl-α-tocopherol in dry cell weight of S. cerevisiae was 5.74 µg/g. The resemblance between the R-squared and adjusted R-squared and the acceptable value of C.V.% revealed the acceptability and accuracy of the model.
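The response-surface step described above (fit a quadratic polynomial model, then read the optimum off its stationary point) can be sketched for a single-factor slice. The data below are simulated around the reported optimal shaking speed and are not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-factor slice of the response surface: entrapment
# level versus shaking speed, peaking near 140 rpm (values illustrative).
speed = np.array([80, 100, 120, 140, 160, 180], float)
response = 5.7 - 0.001 * (speed - 140) ** 2 + 0.01 * rng.standard_normal(6)

# Least-squares fit of a quadratic polynomial, as in response-surface
# methodology (np.polyfit returns highest-degree coefficient first)
b2, b1, b0 = np.polyfit(speed, response, 2)

# Stationary point of b0 + b1*x + b2*x^2; it is a maximum when b2 < 0,
# which is what the convex (dome-shaped) 3-D plots indicate.
x_opt = -b1 / (2 * b2)
print(round(x_opt))
```

In the full study the same logic runs over three factors at once, so the stationary point is found by solving a small linear system in the three coded variables instead of a single division.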
Determination of technological parameters in strip mining by time-of-flight and image processing
NASA Astrophysics Data System (ADS)
Elandaloussi, Frank; Mueller, B.; Osten, Wolfgang
1999-09-01
The conveying and dumping of earth masses lying over the coal seam in lignite surface mining is usually done by overburden conveyor bridges. The overburden, obtained from connected excavators, is transported over the bridge construction using a conveyor belt system and poured into one front dump and three surface dumps. The shaping of the dump growth is of great importance both to guarantee the stability of the dumped masses and of the whole construction, and to prepare the area for re-cultivation. This article describes three measurement systems: one to determine the impact point of the dumped earth masses, one to determine the shape of the entire mining process, and a sensor for the loading of the conveyor belt. For the first measurement system, a real-time video system has been designed, set up, and installed that is capable of determining the impact points of all three dumps simultaneously. The second measurement system is a combination of 5 specially designed laser distance measuring instruments that are able to measure the shape of the mining process under unfavorable environmental conditions such as dust, high temperature changes, heavy shocks, etc. The third sensor is designed for monitoring the transportation of the masses via the conveyor belt system.
An Exploratory Analysis of Waterfront Force Protection Measures Using Simulation
2002-03-01
APPENDIX B. DESIGN POINT DATA (table extraction residue: Tables 16-18 list breach counts, leaker counts, and mean available-PB figures for Design Points One, Two, and Three).
Uncertainty management in intelligent design aiding systems
NASA Technical Reports Server (NTRS)
Brown, Donald E.; Gabbert, Paula S.
1988-01-01
A novel approach to uncertainty management which is particularly effective in intelligent design aiding systems for large-scale systems is presented. The use of this approach in the materials handling system design domain is discussed. It is noted that, at any point in the design process, a point value can be obtained for the evaluation of feasible designs; however, the techniques described provide unique solutions for these point values using only the current information about the design environment.
MAP Attitude Control System Design and Analysis
NASA Technical Reports Server (NTRS)
Andrews, S. F.; Campbell, C. E.; Ericsson-Jackson, A. J.; Markley, F. L.; ODonnell, J. R., Jr.
1997-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point to suppress potential instrument disturbances. To make a full-sky map of cosmic microwave background fluctuations, a combination fast spin and slow precession motion will be used. MAP requires a propulsion system to reach L(sub 2), to unload system momentum, and to perform stationkeeping maneuvers once at L(sub 2). A minimum hardware, power and thermal safe control mode must also be provided. Sufficient attitude knowledge must be provided to yield instrument pointing to a standard deviation of 1.8 arc-minutes. The short development time and tight budgets require a new way of designing, simulating, and analyzing the Attitude Control System (ACS). This paper presents the design and analysis of the control system to meet these requirements.
GREEN: A program package for docking studies in rational drug design
NASA Astrophysics Data System (ADS)
Tomioka, Nobuo; Itai, Akiko
1994-08-01
A program package, GREEN, has been developed that enables docking studies between ligand molecules and a protein molecule. Based on the structure of the protein molecule, the physical and chemical environment of the ligand-binding site is expressed as three-dimensional grid-point data. The grid-point data are used for the real-time evaluation of the protein-ligand interaction energy, as well as for the graphical representation of the binding-site environment. The interactive docking operation is facilitated by various built-in functions, such as energy minimization, energy contribution analysis and logging of the manipulation trajectory. Interactive modeling functions are incorporated for designing new ligand molecules while considering the binding-site environment and the protein-ligand interaction. As an example of the application of GREEN, a docking study is presented on the complex between trypsin and a synthetic trypsin inhibitor. The program package will be useful for rational drug design, based on the 3D structure of the target protein.
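Grid-point evaluation of the kind GREEN uses works by precomputing field values on a 3-D lattice around the binding site and interpolating at atom positions during docking, so the interaction energy can be updated in real time. A minimal sketch of trilinear interpolation on such a grid; the grid contents and geometry are illustrative, not GREEN's actual potentials:

```python
import numpy as np

def trilinear(grid, spacing, origin, point):
    """Interpolate a precomputed 3-D grid (e.g. an interaction-energy term)
    at an arbitrary position, as grid-based docking scorers do."""
    f = (np.asarray(point) - origin) / spacing   # fractional grid coordinates
    i = np.floor(f).astype(int)                  # index of the lower cell corner
    t = f - i                                    # position within the cell
    e = 0.0
    for dx in (0, 1):                            # sum over the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                e += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return e

# Toy grid storing the linear field x + 2y + 3z, which trilinear
# interpolation reproduces exactly.
origin, spacing = np.zeros(3), 1.0
x, y, z = np.meshgrid(*(np.arange(4.0),) * 3, indexing="ij")
grid = x + 2 * y + 3 * z

val = trilinear(grid, spacing, origin, (1.25, 0.5, 2.0))
print(val)  # 1.25 + 1.0 + 6.0 = 8.25
```

The cost of one interpolation is constant regardless of protein size, which is what makes interactive, per-frame energy feedback feasible during manual docking.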
Study of the Army Helicopter Design Hover Criterion Using Temperature and Pressure Altitude
2017-09-01
the Advanced Scout Helicopter Special Study Group reexamined the design point requirement. They recommended increasing the design point pressure...other combinations group between these two extremes. Ultimately, the design point for a helicopter has to be determined by the user of the...helicopter designs. 6. References: Aviation Agency. 1972. "Heavy Lift Helicopter (HLH) Concept Formulation Study (U)", Action Control Number 2958
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL:
- Use same GPS Precise Point Positioning Analysis raw time series
- Variations in time series analysis/post-processing driven by different users:
  - JPL Global Time Series/Velocities: researchers studying reference frame, combining with VLBI/SLR/DORIS
  - JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, ground water studies
  - ARIA Time Series/Coseismic Data Products: hazard monitoring and response focused
- ARIA data system designed to integrate GPS and InSAR:
  - GPS tropospheric delay used for correcting InSAR
  - Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR
  - Zhen Liu is talking tomorrow on InSAR time series analysis
A Longitudinal Analysis of Students' Motives for Communicating with Their Instructors
ERIC Educational Resources Information Center
Myers, Scott A.
2017-01-01
This study utilized the longitudinal survey research design using students' motives to communicate with their instructors as a test case. Participants were 282 undergraduate students enrolled in introductory communication courses at a large Mid-Atlantic university who completed the Student Communication Motives scale at three points (Time 1:…
Community Destruction and Traumatic Stress in Post-Tsunami Indonesia
ERIC Educational Resources Information Center
Frankenberg, Elizabeth; Nobles, Jenna; Sumantri, Cecep
2012-01-01
How are individuals affected when the communities they live in change for the worse? This question is central to understanding neighborhood effects, but few study designs generate estimates that can be interpreted causally. We address issues of inference through a natural experiment, examining post-traumatic stress at multiple time points in a…
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
49 CFR 178.338-9 - Holding time.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cryogenic liquid having a boiling point, at a pressure of one atmosphere, absolute, no lower than the design... that liquid and stabilized to the lowest practical pressure, which must be equal to or less than the... combined liquid and vapor lading at the pressure offered for transportation, and the set pressure of the...
The Development of Preschoolers' Appreciation of Communicative Ambiguity
ERIC Educational Resources Information Center
Nilsen, Elizabeth S.; Graham, Susan A.
2012-01-01
Using a longitudinal design, preschoolers' appreciation of a listener's knowledge of the location of a hidden sticker after the listener was provided with an ambiguous or unambiguous description was assessed. Preschoolers (N = 34) were tested at 3 time points, each 6 months apart (4, 4 1/2, and 5 years). Eye gaze measures demonstrated that…
The New Biochemistry: Blending the Traditional with the Other.
ERIC Educational Resources Information Center
Boyer, Rodney
2000-01-01
Points out the difficulties in designing and presenting a modern chemistry course with an overabundance of topics to cover in a limited amount of class time. Presents a model syllabi for biochemistry majors and the shorter survey course for non-majors, usually consisting of health professionals and biological science majors. (Contains 24…
Educational Leadership Effectiveness: A Rasch Analysis
ERIC Educational Resources Information Center
Sinnema, Claire; Ludlow, Larry; Robinson, Viviane
2016-01-01
Purpose: The purposes of this paper are, first, to establish the psychometric properties of the ELP tool, and, second, to test, using a Rasch item response theory analysis, the hypothesized progression of challenge presented by the items included in the tool. Design/ Methodology/ Approach: Data were collected at two time points through a survey of…
Khosravi, Morteza; Arabi, Simin
In this study, iron zero-valent nanoparticles were synthesized, characterized and studied for removal of methylene blue dye from aqueous solution. The reactions were mathematically described as a function of parameters such as nano zero-valent iron (NZVI) dose, pH, contact time and initial dye concentration, and were modeled using response surface methodology. The experiments were carried out as a central composite design consisting of 30 runs: a 2^4 full factorial design with eight axial points and six center points. The results revealed optimal conditions for dye removal within the studied ranges of NZVI dose 0.1-0.9 g/L, pH 3-11, contact time 20-100 s, and initial dye concentration 10-50 mg/L. Under the optimal values of the process parameters, a dye removal efficiency of 92.87% was predicted, very close to the experimental value (92.21%) obtained in the batch experiment. In the optimization, the R^2 and adjusted R^2 correlation coefficients for the model were evaluated as 0.96 and 0.93, respectively.
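The 30-run structure described above (a 2^4 factorial with eight axial and six center points) can be generated in coded units with a short sketch; the axial distance `alpha` is an illustrative choice, not the value used in the study.

```python
from itertools import product

def central_composite_design(k=4, n_center=6, alpha=2.0):
    """Generate a central composite design in coded units for k factors."""
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]  # 2^k corners
    axial = []
    for i in range(k):                          # 2k axial (star) points
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]  # replicated center points
    return factorial + axial + center

design = central_composite_design()
# 16 factorial + 8 axial + 6 center = 30 runs, matching the study's design
```

The coded levels are then mapped linearly onto the physical ranges (dose, pH, time, concentration) before running the experiments.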
Rasouli, H; Fatehi, A
2014-12-01
In this paper, a simple method is presented for tuning weighted PI(λ) + D(μ) controller parameters based on pole placement for pseudo-second-order fractional systems. One advantage of this controller is its capability to reduce disturbance effects and improve the response to inputs simultaneously. In the following sections, the performance of this controller is evaluated experimentally in controlling the vertical magnetic flux in the Damavand tokamak. For this work, a fractional-order model is first identified using the output-error technique in the time domain. Obtaining the desired time responses for the magnetic flux is vital for various practical experiments on the Damavand tokamak. To achieve this, the desired closed-loop reference models are first obtained based on the generalized characteristic ratio assignment method for fractional-order systems. Then, for the identified model, a set-point weighting PI(λ) + D(μ) controller is designed and simulated. Finally, this controller is implemented on the digital signal processor control system of the plant for fast/slow control of the magnetic flux. The practical results show appropriate performance of this controller.
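As a rough illustration of set-point weighting, here is a discrete integer-order PID sketch; the paper's controller is fractional-order (PI^λ + D^μ), which this simplification does not capture, and all gains are illustrative.

```python
class SetpointWeightedPID:
    """Discrete set-point weighted PID (integer-order simplification of
    PI^lambda + D^mu). The weight b scales the reference in the
    proportional term only, softening the response to set-point steps
    without degrading disturbance rejection. Gains are illustrative."""

    def __init__(self, kp, ki, kd, b, dt):
        self.kp, self.ki, self.kd, self.b, self.dt = kp, ki, kd, b, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, r, y):
        err = r - y
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt   # derivative on error
        self.prev_err = err
        return (self.kp * (self.b * r - y)
                + self.ki * self.integral
                + self.kd * deriv)
```

In the fractional version, the integral and derivative terms are replaced by fractional-order operators of orders λ and μ, which add two extra tuning degrees of freedom.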
Pseudo-time-reversal symmetry and topological edge states in two-dimensional acoustic crystals
Mei, Jun; Chen, Zeguo; Wu, Ying
2016-01-01
We propose a simple two-dimensional acoustic crystal to realize topologically protected edge states for acoustic waves. The acoustic crystal is composed of a triangular array of core-shell cylinders embedded in a water host. By utilizing the point group symmetry of two doubly degenerate eigenstates at the Γ point, we can construct pseudo-time-reversal symmetry as well as pseudo-spin states in this classical system. We develop an effective Hamiltonian for the associated dispersion bands around the Brillouin zone center, and find the inherent link between the band inversion and the topological phase transition. With numerical simulations, we unambiguously demonstrate the unidirectional propagation of acoustic edge states along the interface between a topologically nontrivial acoustic crystal and a trivial one, and the robustness of the edge states against defects with sharp bends. Our work provides a new design paradigm for manipulating and transporting acoustic waves in a topologically protected manner. Technological applications and devices based on our design are expected in various frequency ranges of interest, spanning from infrasound to ultrasound. PMID:27587311
Research on photodiode detector-based spatial transient light detection and processing system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng
2016-10-01
In order to realize real-time identification and processing of spatial transient light signals, the features and energy of the captured target light signal are first described and quantitatively calculated. Because a transient light signal occurs randomly, has a short duration, and has an evident beginning and end, a photodiode-detector-based spatial transient light detection and processing system is proposed and designed in this paper. The system has a large field of view and realizes non-imaging energy detection of random, transient and weak point targets against the complex background of the space environment. Extracting a weak signal from a strong background is difficult; because the background signal changes slowly while the target signal changes quickly, a filter is adopted for background subtraction. Variable-speed sampling is realized by sampling data points at a gradually increasing interval, which resolves two difficulties: real-time processing of a large amount of data, and the power consumption required to store it. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme. The practical system operated reliably, and detection and processing of the target signal under a strong-sunlight background was realized. The results indicate that the system can detect the target signal's characteristic waveform in real time and monitor the system's working parameters. The prototype design could be used in a variety of engineering applications.
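The slow-background / fast-target separation described above can be sketched with an exponential moving average; the rate and threshold constants here are illustrative, not the paper's filter design.

```python
def detect_transients(samples, alpha=0.02, k=5.0):
    """Flag fast excursions above a slowly varying background.

    A slow exponential moving average tracks the background level and its
    variance; samples whose residual exceeds k noise-scaled units are
    flagged as candidate transients and excluded from the background
    update. alpha and k are illustrative choices."""
    bg = samples[0]
    var = 0.0
    hits = []
    for i, x in enumerate(samples):
        resid = x - bg
        if resid * resid > k * k * max(var, 1e-12):
            hits.append(i)                 # fast excursion: candidate target
        else:
            bg += alpha * resid            # slow background update
            var += alpha * (resid * resid - var)
    return hits
```

Because flagged samples never feed the background estimate, a bright transient does not drag the baseline upward, which keeps the detector sensitive immediately after the event.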
A real-time hybrid neuron network for highly parallel cognitive systems.
Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene
2016-08-01
For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.
Modelling short time series in metabolomics: a functional data analysis approach.
Montana, Giovanni; Berk, Maurice; Ebbels, Tim
2011-01-01
Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
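As a stand-in for the smooth-random-curve idea above, a minimal sketch that pools replicates, skips missing values, and fits a low-order polynomial; the paper's actual model is a functional mixed-effects approach, not a simple polynomial, so only the mechanics are illustrative.

```python
import numpy as np

def fit_smooth_curve(times, replicates, degree=2):
    """Pool repeated measurements from several experimental units and fit
    one smooth curve by least squares. Missing values (None) are simply
    skipped, illustrating why curve-based models cope well with the
    sparse, gappy time courses typical of metabolomic studies."""
    t, y = [], []
    for series in replicates:
        for ti, yi in zip(times, series):
            if yi is not None:
                t.append(ti)
                y.append(yi)
    coeffs = np.polyfit(t, y, degree)
    return np.poly1d(coeffs)               # callable smooth curve t -> y(t)
```

Fitting one such curve per experimental condition, and then testing a distance between the two fitted curves, mirrors the paper's test for differing temporal profiles.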
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer-Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and the spline calculates the number of points required so that, when displayed on a computer screen, the result appears as a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose structure cannot be defined by a regular rectangular grid. Surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of software that reads control points and calculates the surface is run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implement subdivision surfaces; however, few algorithms are documented in the literature to help developers write efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, is employed to illustrate the algorithm.
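For reference, the classic Catmull-Clark rules the paper implements can be sketched for a closed quad mesh; boundary cases and the paper's time-efficient data structures are omitted.

```python
from collections import defaultdict

def catmull_clark_points(verts, faces):
    """One Catmull-Clark refinement step for a closed quad mesh, returning
    the three families of new points (face, edge, vertex)."""
    # Face point: centroid of the face's vertices.
    face_pts = [tuple(sum(verts[v][i] for v in f) / len(f) for i in range(3))
                for f in faces]

    # Map each edge to the faces that share it.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)

    # Edge point: average of the edge's endpoints and its two face points.
    edge_pts = {}
    for e, fs in edge_faces.items():
        a, b = tuple(e)
        pts = [verts[a], verts[b]] + [face_pts[fi] for fi in fs]
        edge_pts[e] = tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))

    # Vertex point: (F + 2R + (n - 3) P) / n, with F the average of the
    # adjacent face points, R the average of incident edge midpoints,
    # n the valence, and P the original position.
    vert_pts = []
    for v, p in enumerate(verts):
        fs = [fi for fi, f in enumerate(faces) if v in f]
        es = [e for e in edge_faces if v in e]
        n = len(fs)
        F = [sum(face_pts[fi][i] for fi in fs) / n for i in range(3)]
        R = [sum((verts[a][i] + verts[b][i]) / 2
                 for a, b in (tuple(e) for e in es)) / len(es)
             for i in range(3)]
        vert_pts.append(tuple((F[i] + 2 * R[i] + (n - 3) * p[i]) / n
                              for i in range(3)))
    return face_pts, edge_pts, vert_pts
```

An efficient implementation replaces the linear scans above with precomputed vertex-to-face and vertex-to-edge adjacency, which is exactly where a run-time-oriented algorithm pays off.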
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported, since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
Areal-reduction factors for the precipitation of the 1-day design storm in Texas
Asquith, William H.
1999-01-01
The reduction of the precipitation depth from a design storm for a point to an effective (mean) depth over a watershed often is important for cost-effective design of hydraulic structures by reducing the volume of precipitation. A design storm for a point is the depth of precipitation that has a specified duration and frequency (recurrence interval). The effective depth can be calculated by multiplying the design-storm depth by an areal-reduction factor (ARF). ARF ranges from 0 to 1, varies with the recurrence interval of the design storm, and is a function of watershed characteristics such as watershed size and shape, geographic location, and time of year that the design storm occurs. This report documents an investigation of ARF by the U.S. Geological Survey, in cooperation with the Texas Department of Transportation, for the 1-day design storm for Austin, Dallas, and Houston, Texas. The "annual maxima-centered" approach used in this report specifically considers the distribution of concurrent precipitation surrounding an annual precipitation maxima. Unlike previously established approaches, the annual maxima-centered approach does not require the spatial averaging of precipitation nor explicit definition of a representative area of a particular storm in the analysis. Graphs of the relation between ARF and circular watershed area (to about 7,000 square miles) are provided, and a technique to calculate ARF for noncircular watersheds is discussed.
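The ARF calculation can be sketched as follows; the lookup table here is hypothetical (the report gives the actual ARF-area relation graphically), so only the mechanics are illustrative.

```python
import math

# Hypothetical ARF lookup (circular watershed area in square miles -> ARF);
# the report provides such curves graphically for Austin, Dallas, and Houston.
ARF_TABLE = [(1, 1.00), (10, 0.95), (100, 0.85), (1000, 0.70), (7000, 0.55)]

def areal_reduction_factor(area_sq_mi):
    """Log-linear interpolation of ARF over circular watershed area."""
    pts = ARF_TABLE
    if area_sq_mi <= pts[0][0]:
        return pts[0][1]
    for (a0, f0), (a1, f1) in zip(pts, pts[1:]):
        if area_sq_mi <= a1:
            t = ((math.log(area_sq_mi) - math.log(a0))
                 / (math.log(a1) - math.log(a0)))
            return f0 + t * (f1 - f0)
    return pts[-1][1]

# Effective (mean) depth = design-storm (point) depth x ARF,
# e.g. a 10-inch point depth spread over a 100-square-mile watershed:
effective = 10.0 * areal_reduction_factor(100)
```

A noncircular watershed would first be mapped to an equivalent area before the lookup, per the technique the report discusses.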
CASOAR - An infrared active wave front sensor for atmospheric turbulence analysis
NASA Astrophysics Data System (ADS)
Cariou, Jean-Pierre; Dolfi, Agnes
1992-12-01
Knowledge of the deformation of every point of a wave front over time allows statistical turbulence parameters to be analyzed and real-time adaptive optics to be designed. An optical instrument was built to meet this need. Integrated in a compact enclosure for experiments on outdoor sites, CASOAR allows the deformations of a wave front to be measured rapidly (100 Hz) and with accuracy (1 deg). CASOAR is an active system: it includes its own light source (a CW CO2 laser), making it self-contained, self-aligned and insensitive to spurious light rays. After being reflected off a mirror located beyond the atmospheric layer to be analyzed (at a range of several kilometers), the beam is received and detected by coherent mixing. The electronic phase is converted into optical phase and recorded or displayed in real time on a monitor. Experimental results are shown, pointing out the capabilities of this device.
Rorie, David A; Rogers, Amy; Mackenzie, Isla S; Ford, Ian; Webb, David J; Willams, Bryan; Brown, Morris; Poulter, Neil; Findlay, Evelyn; Saywood, Wendy; MacDonald, Thomas M
2016-02-09
Nocturnal blood pressure (BP) appears to be a better predictor of cardiovascular outcome than daytime BP. The BP lowering effects of most antihypertensive therapies are often greater in the first 12 h compared to the next 12 h. The Treatment In Morning versus Evening (TIME) study aims to establish whether evening dosing is more cardioprotective than morning dosing. The TIME study uses the prospective, randomised, open-label, blinded end-point (PROBE) design. TIME recruits participants by advertising in the community, from primary and secondary care, and from databases of consented patients in the UK. Participants must be aged over 18 years, prescribed at least one antihypertensive drug taken once a day, and have a valid email address. After the participants have self-enrolled and consented on the secure TIME website (http://www.timestudy.co.uk) they are randomised to take their antihypertensive medication in the morning or the evening. Participant follow-ups are conducted after 1 month and then every 3 months by automated email. The trial is expected to run for 5 years, randomising 10,269 participants, with average participant follow-up being 4 years. The primary end point is hospitalisation for the composite end point of non-fatal myocardial infarction (MI), non-fatal stroke (cerebrovascular accident; CVA) or any vascular death determined by record-linkage. Secondary end points are: each component of the primary end point, hospitalisation for non-fatal stroke, hospitalisation for non-fatal MI, cardiovascular death, all-cause mortality, hospitalisation or death from congestive heart failure. The primary outcome will be a comparison of time to first event comparing morning versus evening dosing using an intention-to-treat analysis. The sample size is calculated for a two-sided test to detect 20% superiority at 80% power. TIME has ethical approval in the UK, and results will be published in a peer-reviewed journal. UKCRN17071; Pre-results. 
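For intuition about the stated power calculation, a generic Schoenfeld-type event count for detecting a 20% hazard reduction at 80% power can be sketched; this is a standard textbook approximation, and the protocol's own sample-size method may differ.

```python
from math import log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld approximation: number of primary-end-point events needed
    for a two-sided log-rank test to detect hazard ratio `hr` with the
    given power, under 1:1 allocation (alloc = 0.5)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)          # two-sided significance
    zb = z.inv_cdf(power)                  # power
    return (za + zb) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2)

# hr = 0.8 corresponds to the trial's 20% superiority margin
events = required_events(0.8)
```

Dividing the required event count by the expected annual event rate and the average follow-up then yields a participant count; with rare events, that quickly reaches the five-figure enrollment the trial targets.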
Laser guide star pointing camera for ESO LGS Facilities
NASA Astrophysics Data System (ADS)
Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.
2014-08-01
Every observatory using LGS-AO routinely experiences the long time needed to bring the laser guide star into the wavefront sensor field of view and acquire it. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer for multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of LGS systems is relevant to boosting the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT performs the initial active optics cycles to adjust its own optics on a natural star target after a preset. The LPC minimizes the accuracy needed for LGS pointing model calibrations while reaching sub-arcsecond LGS absolute pointing accuracy. This considerably reduces the LGS acquisition time and operational overheads. The LPC is a smart CCD camera, fed by the 150 mm diameter aperture of a Maksutov telescope mounted on the top ring of the VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera is able to recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct, once calibrated, the LPC can calculate upon request each LGS's return flux, FWHM and uplink beam scattering levels.
NASA Astrophysics Data System (ADS)
Takemiya, Tetsushi
In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone.
However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. Derivatives computed with the generated derivative code are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point.
With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite differentiation (FD) method, and then, the Robust AMF is implemented along with the sequential quadratic programming (SQP) optimization method with only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing computational time for calculating derivatives and the necessity of AMF with an optimum design point always in the feasible region are discussed as future work.
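The surrogate construction at the heart of the AMF, matching the high-fidelity value and gradient at a design point, can be sketched with a first-order additive correction; the model functions here are user-supplied placeholders, not the dissertation's CFD models.

```python
import numpy as np

def additive_surrogate(low, grad_low, high, grad_high, x0):
    """First-order additive correction: shift a low-fidelity model so the
    surrogate matches the high-fidelity value and gradient at design
    point x0 (the consistency conditions the AMF requires)."""
    c0 = high(x0) - low(x0)                 # value correction at x0
    c1 = grad_high(x0) - grad_low(x0)       # gradient correction at x0

    def surrogate(x):
        return low(x) + c0 + c1 @ (np.asarray(x) - np.asarray(x0))

    return surrogate
```

Inside a trust region around x0, the optimizer works on this cheap surrogate; the trust region ratio then compares predicted and actual high-fidelity improvement to decide whether to accept the step and how to resize the region.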
Annular suspension and pointing system with controlled DC electromagnets
NASA Technical Reports Server (NTRS)
Vu, Josephine Lynn; Tam, Kwok Hung
1993-01-01
The Annular Suspension and Pointing System (ASPS) developed by the Flight System division of Sperry Corporation is a six-degree-of-freedom payload pointing system designed for use with the space shuttle. This magnetic suspension and pointing system provides precisely controlled pointing in six degrees of freedom, isolation of payload-carrier disturbances, and end-mount controlled pointing. These are great advantages over traditional mechanical joints for space applications. In this design, we first analyzed the assumed model of the single-degree ASPS bearing actuator and obtained the plant dynamics equations. By linearizing the plant dynamics equations, we designed the cascade and feedback compensators such that a stable and satisfactory result was obtained. The specified feedback compensator was computer-simulated with the nonlinearized plant dynamics equations. The results indicated that an unstable output occurred; in other words, the designed feedback compensator failed. The failure of the design is due to the Taylor series expansion not converging.
48 CFR 47.303-15 - F.o.b. designated air carrier's terminal, point of exportation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System ... F.o.b. designated air... Contracts 47.303-15 F.o.b. designated air carrier's terminal, point of exportation. (a) Explanation of delivery term. F.o.b. designated air carrier's terminal, point of exportation means free of expense to the...
48 CFR 47.303-16 - F.o.b. designated air carrier's terminal, point of importation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System ... F.o.b. designated air... Contracts 47.303-16 F.o.b. designated air carrier's terminal, point of importation. (a) Explanation of delivery term. F.o.b. designated air carrier's terminal, point of importation means free of expense to the...
Scaled accelerographs for design of structures in Quetta, Baluchistan, Pakistan
NASA Astrophysics Data System (ADS)
Bhatti, Abdul Qadir
2016-12-01
Structural design for seismic excitation is usually based on peak values of forces and deformations over the duration of an earthquake. Determining these peak values requires dynamic analysis, either response history analysis (RHA), also called time history analysis, or response spectrum analysis (RSA), both of which depend upon ground motion severity. In the past, PGA has been used to describe ground motion severity, because the seismic force on a rigid body is proportional to the ground acceleration. However, it has been pointed out that the single highest peak on an accelerogram is a very unreliable description of the accelerogram as a whole. In this study, we consider 0.2- and 1-s spectral accelerations. Seismic loading is defined in terms of a design spectrum and time histories, leading to the two methods of dynamic analysis. A design spectrum for Quetta is constructed incorporating the parameters of ASCE 7-05/IBC 2006/2009, which are used by modern codes and regulations worldwide, such as IBC 2006/2009, ASCE 7-05, ATC-40 and FEMA-356. A suite of time histories representing the design earthquake is also prepared, a helpful tool for carrying out time history dynamic analysis of structures in Quetta.
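The ASCE 7-05 design response spectrum mentioned above is built from the 0.2-s (SDS) and 1-s (SD1) design spectral accelerations; a sketch follows, with the input values for Quetta being hypothetical placeholders.

```python
def asce7_design_spectrum(sds, sd1, t, tl=8.0):
    """ASCE 7-05 design response spectrum ordinate Sa(T), in g, from the
    short-period (SDS) and 1-s (SD1) design spectral accelerations. TL is
    the long-period transition period (site-dependent; 8 s assumed)."""
    t0 = 0.2 * sd1 / sds           # start of the constant-acceleration plateau
    ts = sd1 / sds                 # end of the plateau
    if t < t0:
        return sds * (0.4 + 0.6 * t / t0)   # rising branch
    if t <= ts:
        return sds                          # constant acceleration
    if t <= tl:
        return sd1 / t                      # constant velocity (1/T)
    return sd1 * tl / t ** 2                # constant displacement (1/T^2)

# e.g. hypothetical SDS = 1.0 g and SD1 = 0.4 g:
sa_1s = asce7_design_spectrum(1.0, 0.4, 1.0)
```

Evaluating Sa at each structural period, scaled by the response modification and importance factors, gives the forces used in RSA; the companion time-history suite feeds RHA.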
Assessing Freshman Engineering Students' Understanding of Ethical Behavior.
Henslee, Amber M; Murray, Susan L; Olbricht, Gayla R; Ludlow, Douglas K; Hays, Malcolm E; Nelson, Hannah M
2017-02-01
Academic dishonesty, including cheating and plagiarism, is on the rise in colleges, particularly among engineering students. While students decide to engage in these behaviors for many different reasons, academic integrity training can help improve their understanding of ethical decision making. The two studies outlined in this paper assess the effectiveness of an online module in increasing academic integrity among first semester engineering students. Study 1 tested the effectiveness of an academic honesty tutorial by using a between groups design with a Time 1- and Time 2-test. An academic honesty quiz assessed participants' knowledge at both time points. Study 2, which incorporated an improved version of the module and quiz, utilized a between groups design with three assessment time points. The additional Time 3-test allowed researchers to test for retention of information. Results were analyzed using ANCOVA and t tests. In Study 1, the experimental group exhibited significant improvement on the plagiarism items, but not the total score. However, at Time 2 there was no significant difference between groups after controlling for Time 1 scores. In Study 2, between- and within-group analyses suggest there was a significant improvement in total scores, but not plagiarism scores, after exposure to the tutorial. Overall, the academic integrity module impacted participants as evidenced by changes in total score and on specific plagiarism items. Although future implementation of the tutorial and quiz would benefit from modifications to reduce ceiling effects and improve assessment of knowledge, the results suggest such tutorial may be one valuable element in a systems approach to improving the academic integrity of engineering students.
Structural Controllability of Temporal Networks with a Single Switching Controller
Yao, Peng; Hou, Bao-Yu; Pan, Yu-Jian; Li, Xiang
2017-01-01
A temporal network, whose topology evolves with time, is an important class of complex network. Temporal trees of a temporal network describe the necessary edges sustaining the network, as well as their active time points. With a switching controller that properly selects its location over time, temporal trees are used to improve the controllability of the network, so that more nodes are controlled within the limited time. Several switching strategies that efficiently select the location of the controller are designed and verified on synthetic and empirical temporal networks, achieving better control performance. PMID:28107538
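The notion of time-respecting paths underlying temporal trees can be illustrated with a small reachability sketch: a node can be influenced within the horizon only if contacts from the controller's side occur in non-decreasing time order. This toy (undirected contacts, made-up data) is not the paper's algorithm, just the path semantics it builds on:

```python
def temporal_reachable(edges, source):
    """Nodes reachable from `source` along time-respecting paths.

    edges: list of (u, v, t) contacts, treated as undirected.
    A single pass over time-sorted contacts computes earliest arrivals.
    """
    arrival = {source: float("-inf")}  # earliest arrival time per node
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        for a, b in ((u, v), (v, u)):
            # the contact can relay only if we arrived at `a` by time t
            if a in arrival and arrival[a] <= t and t < arrival.get(b, float("inf")):
                arrival[b] = t
    return set(arrival)
```

Note how node 4 in the test is untouched: its only contact fires before information from the source can reach node 3.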
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In a VRP, routes from a distribution point to geographically distributed points are designed at minimum cost while meeting customer demands. Each point must be visited exactly once, by one vehicle on one route, and the total demand on a route must not exceed the capacity of the vehicle assigned to it. VRP variants arise from real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a VRP variant in which vehicles have different capacities and costs. Our problem involves two types of vehicles. This study uses real-world data obtained from a company operating in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS, and the optimal solution is found in a reasonable time.
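The two core route constraints stated above (one capacity-limited vehicle per route, cost as tour length from and back to the depot) can be sketched as a feasibility and cost check. The distance matrix and demands below are invented for illustration, not the company's data:

```python
def route_cost(route, dist, demand, capacity):
    """Total distance of a depot-to-depot route, or None if infeasible.

    route: sequence of customer indices (depot is index 0);
    dist: symmetric distance matrix; demand: per-customer demand;
    capacity: capacity of the vehicle assigned to this route.
    """
    if sum(demand[c] for c in route) > capacity:
        return None  # capacity constraint violated
    stops = [0] + list(route) + [0]  # start and end at the depot
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))
```

A heterogeneous-fleet solver would evaluate such checks with a different `capacity` (and fixed cost) per vehicle type.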
Effect of two doses of ginkgo biloba extract (EGb 761) on the dual-coding test in elderly subjects.
Allain, H; Raoul, P; Lieury, A; LeCoz, F; Gandon, J M; d'Arbigny, P
1993-01-01
The subjects of this double-blind study were 18 elderly men and women (mean age, 69.3 years) with slight age-related memory impairment. In a crossover-study design, each subject received placebo or an extract of Ginkgo biloba (EGb 761) (320 mg or 600 mg) 1 hour before performing a dual-coding test that measures the speed of information processing; the test consists of several coding series of drawings and words presented at decreasing times of 1920, 960, 480, 240, and 120 ms. The dual-coding phenomenon (a break point between coding verbal material and images) was demonstrated in all the tests. After placebo, the break point was observed at 960 ms and dual coding beginning at 1920 ms. After each dose of the ginkgo extract, the break point (at 480 ms) and dual coding (at 960 ms) were significantly shifted toward a shorter presentation time, indicating an improvement in the speed of information processing.
NASA Astrophysics Data System (ADS)
Filipiak-Kowszyk, Daria; Janowski, Artur; Kamiński, Waldemar; Makowska, Karolina; Szulwic, Jakub; Wilde, Krzysztof
2016-12-01
The study raises issues concerning an automatic system designed for monitoring the movement of controlled points located on the roof covering of the Forest Opera in Sopot. It presents the calculation algorithm proposed by the authors, which takes into account the specific design and location of the test object. The high forest stand makes it difficult to use distant reference points. Hence the reference points used to verify the stability of the measuring position are located on the ground elements of the six-meter-deep concrete foundations, from which steel arches rise to support the roof covering (membrane) of the Forest Opera. The tacheometer used in the measurements is located in a glass body placed on a special platform attached to the steel arches. Measurements of horizontal directions, vertical angles, and distances can additionally be subject to errors caused by the laser beam passing through the glass. Dynamic changes in weather conditions, including temperature and pressure, also have a significant impact on the measurement errors, and thus on the accuracy of the final determinations represented by the relevant covariance matrices. The estimated coordinates of the reference points, controlled points, and tacheometer, along with the corresponding covariance matrices obtained from the calculations in the various epochs, are used to determine the significance of the detected movements. If the reference points are stable, the algorithm can also track changes in the position of the tacheometer over time on the basis of measurements to these points.
Structural analysis of three space crane articulated-truss joint concepts
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Sutter, Thomas R.
1992-01-01
Three space crane articulated-truss joint concepts are studied to evaluate their static structural performance over a range of geometric design parameters. Emphasis is placed on maintaining the four-longeron reference truss performance across the joint while allowing large-angle articulation. The maximum positive articulation angle and the actuator length ratio required to reach that angle are computed for each concept as the design parameters are varied. Configurations with a maximum articulation angle less than 120 degrees, or requiring an actuator length ratio over two, are not considered. Tip rotation and lateral deflection of a truss beam with an articulated-truss joint at midspan are used to select a point design for each concept. Deflections for one point design are up to 40 percent higher than for the other two. Dynamic performance of the three point designs is computed as a function of joint articulation angle. The two lowest frequencies of each point design are relatively insensitive to large variations in joint articulation angle. One point design has a higher maximum tip velocity for the emergency stop than the others.
Study design for a hepatitis B vaccine trial.
Lustbader, E D; London, W T; Blumberg, B S
1976-01-01
A short-time trial of small sample size for an evaluation of the hepatitis B vaccine is proposed and designed. The vaccine is based on the premise that antibody to the surface antigen of the hepatitis B virus is protective against viral infection. This premise is verified by using the presence of the surface antigen as the marker of infection and comparing infection rates in renal dialysis patients who had naturally acquired antibody to patients without antibody. Patients with antibody have an extremely low risk of infection. The probability of remaining uninfected decreases at an exponential rate for patients without antibody, implying a constant risk of infection at any point in time. The study design described makes use of this time independence and the observed infection rates to formulate a clinical trial which can be accomplished with a relatively small number of patients. This design might be useful if, in preliminary studies, it is shown that the vaccine produces antibody in the patients and that protection against hepatitis B virus would be beneficial to the patients. PMID:1062809
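The "constant risk of infection at any point in time" observation above is exactly an exponential survival model, which is what lets the trial size be computed from observed infection rates. A minimal sketch, with an illustrative hazard rate and horizon rather than the trial's actual values:

```python
import math

def p_uninfected(rate, t):
    """Constant-hazard model: P(still uninfected at time t) = exp(-rate*t)."""
    return math.exp(-rate * t)

def expected_infections(n, rate, t):
    """Expected number of n antibody-negative patients infected by time t."""
    return n * (1.0 - p_uninfected(rate, t))
```

Because the hazard is time-independent, follow-up time and patient count trade off directly, which is what permits a relatively small trial.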
Mechanism and design of intermittent aeration activated sludge process for nitrogen removal.
Hanhan, Oytun; Insel, Güçlü; Yagci, Nevin Ozgur; Artan, Nazik; Orhon, Derin
2011-01-01
The paper provides a comprehensive evaluation of the mechanism and design of the intermittent aeration activated sludge process for nitrogen removal. Based on the specific character of the process, the total cycle time (TC), the aerated fraction (AF), and the cycle time ratio (CTR) were defined as the major design parameters, aside from the sludge age of the system. Their impact on system performance was evaluated by means of process simulation. A rational design procedure was developed on the basis of basic stoichiometry and mass balances for the oxidation and removal of nitrogen under aerobic and anoxic conditions, enabling selection of operating parameters for optimum performance. The simulation results indicated that the total nitrogen level could be reduced to a minimum by appropriate manipulation of the aerated fraction and cycle time ratio. They also showed that the effluent total nitrogen could be lowered to around 4.0 mgN/L by adjusting the dissolved oxygen set-point to 0.5 mg/L, a level that promotes simultaneous nitrification and denitrification.
The use of online information resources by nurses.
Wozar, Jody A; Worona, Paul C
2003-04-01
Based on the results of an informal needs assessment, the Usage of Online Information Resources by Nurses Project was designed to provide clinical nurses with accurate medical information at the point of care by introducing them to existing online library resources through instructional classes. Actual usage of the resources was then monitored for a set period of time. A two-hour hands-on class was developed for interested nurses, and participants were instructed in the content and use of several different online resources. A special Web page serving as an access point to the resources was designed for this project. Using a password system and WebTrends™ software, each participant's usage of the resources was monitored for a thirty-day period following the class. At the end of the thirty days, usage results were tabulated and participants were sent general evaluation forms. Eight participants accessed the project page thirty-nine times in the thirty-day period. The most accessed resource was Primary Care Online (PCO), accessed thirty-three times, followed by MD Consult (17), Ovid (8), NLM resources (5), and electronic journals (1). The individual with the highest usage accessed the project page thirteen times. Practicing clinical nurses will use online medical information resources if they are first introduced to them and taught how to access and use them. Health sciences librarians can play an important role in providing instruction to this often overlooked population.
[Development of a simultaneous strain and temperature sensor with small-diameter FBG].
Liu, Rong-mei; Liang, Da-kai
2011-03-01
The manufacture of a small-diameter FBG was designed, and the cross-sensitivity of temperature and strain at the sensing point was resolved. Based on coupled-mode theory, the optical properties of the designed small-diameter FBG were studied, including its reflection and transmission spectra. A single-mode optical fiber with a cladding diameter of 80 µm was fabricated into a fiber Bragg grating (φ80 FBG). According to the spectrum simulation, the grating length and period were chosen so that the Bragg wavelength was 1528 nm. The connector of the small-diameter FBG to the demodulator was designed as well. In applications, an FBG measures the total deformation, including the strain due to forces applied to the structure as well as thermal expansion. To overcome this inconvenience and measure both parameters at the same time and location, a novel scheme for a simultaneous strain and temperature sensor was presented. Since a uniform-strength beam has the same deformation at all points, a pair of φ80 FBGs was attached to a uniform-strength cantilever, one on the upper surface and the other on the lower surface, so that the strains at the monitoring points were equal in magnitude but opposite in sign. The strain and temperature at the sensing point could then be discriminated through a matrix equation. The determinant of the coefficient matrix K is non-zero, so the matrix inversion is well conditioned even when the values of the K elements are close. Consequently, the cross-sensitivity of the FBG to temperature and strain can be resolved experimentally. Experiments were carried out to study the strain discriminability of small-diameter FBG sensors. The temperature and strain were calculated, with errors of 5% and 6%, respectively.
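For the idealized configuration described above, where the paired gratings see equal-and-opposite strain but the same temperature, the matrix equation collapses to sum and difference formulas. A sketch with illustrative sensitivity coefficients (real K values come from calibration, not these numbers):

```python
def discriminate(dl1, dl2, k_eps, k_t):
    """Recover (strain, delta_T) from the wavelength shifts of an FBG pair
    on opposite faces of a uniform-strength beam:
        dl1 = +k_eps * eps + k_t * dT   (upper surface, tension)
        dl2 = -k_eps * eps + k_t * dT   (lower surface, compression)
    Equivalent to inverting the 2x2 matrix K = [[k_eps, k_t], [-k_eps, k_t]],
    whose determinant 2*k_eps*k_t is non-zero whenever both sensitivities are.
    """
    eps = (dl1 - dl2) / (2.0 * k_eps)   # temperature cancels
    dT = (dl1 + dl2) / (2.0 * k_t)      # strain cancels
    return eps, dT
```

The good conditioning claimed in the abstract is visible here: the difference and sum channels are orthogonal by construction.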
Towards microfluidic reactors for cell-free protein synthesis at the point-of-care
Timm, Andrea C.; Shankles, Peter G.; Foster, Carmen M.; ...
2015-12-22
Cell-free protein synthesis (CFPS) is a powerful technology that allows for optimization of protein production without maintenance of a living system. Integrated within micro- and nano-fluidic architectures, CFPS can be optimized for point-of-care use. Here, we describe the development of a microfluidic bioreactor designed to facilitate the production of a single dose of a therapeutic protein, in a small-footprint device, at the point-of-care. This new design builds on the use of a long, serpentine-channel bioreactor and is enhanced by integrating a nanofabricated membrane to allow exchange of materials between parallel reactor and feeder channels. This engineered membrane facilitates the exchange of metabolites, energy, and inhibitory species, prolonging the CFPS reaction and increasing protein yield. Membrane permeability can be altered by plasma-enhanced chemical vapor deposition and atomic layer deposition to tune the exchange rate of small molecules. This allows for extended reaction times and improved yields. Further, the reaction product and higher molecular weight components of the transcription/translation machinery in the reactor channel can be retained. As a result, we show that the microscale bioreactor design produces higher protein yields than conventional tube-based batch formats, and that product yields can be dramatically improved by facilitating small-molecule exchange within the dual-channel bioreactor.
In vivo burn diagnosis by camera-phone diffuse reflectance laser speckle detection.
Ragol, S; Remer, I; Shoham, Y; Hazan, S; Willenz, U; Sinelnikov, I; Dronov, V; Rosenberg, L; Bilenca, A
2016-01-01
Burn diagnosis using laser speckle light typically employs widefield illumination of the burn region to produce two-dimensional speckle patterns from light backscattered from the entire irradiated tissue volume. Analysis of speckle contrast in these time-integrated patterns can then provide information on burn severity. Here, by contrast, we use point illumination to generate diffuse reflectance laser speckle patterns of the burn. By examining spatiotemporal fluctuations in these time-integrated patterns along the radial direction from the incident point beam, we show the ability to distinguish partial-thickness burns in a porcine model in vivo within the first 24 hours post-burn. Furthermore, our findings suggest that time-integrated diffuse reflectance laser speckle can be useful for monitoring burn healing over time post-burn. Unlike conventional diffuse reflectance laser speckle detection systems that utilize scientific or industrial-grade cameras, our system is designed with a camera-phone, demonstrating the potential for burn diagnosis with a simple imager.
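Speckle-contrast analysis of time-integrated patterns, as used above, reduces locally to a sigma-over-mean statistic: lower contrast indicates more motion (e.g., perfusion) blurring the speckle. A minimal sketch over a 1-D window of intensities, far simpler than the authors' full spatiotemporal radial analysis:

```python
def speckle_contrast(window):
    """Local speckle contrast K = sigma / mean over a patch of
    time-integrated intensity samples (population standard deviation).
    """
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (var ** 0.5) / mean
```

In practice this statistic is evaluated in sliding windows over the speckle image, here along the radial direction from the incident point beam.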
Interpolation of longitudinal shape and image data via optimal mass transport
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen
2014-03-01
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
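In one dimension, mass-preserving OMT interpolation has a closed form: sort both point sets, pair points by rank (the 1-D optimal transport map), and move each point a fraction alpha along its path. This toy displacement interpolation (equal-size uniform point sets assumed) is a drastically simplified stand-in for the image- and shape-level OMT used in the paper:

```python
def omt_interpolate_1d(xs_a, xs_b, alpha):
    """Displacement interpolation between two equal-size 1-D point clouds.

    Sorting both clouds and pairing by rank is the optimal (monotone)
    transport map in 1-D; each point then moves a fraction `alpha`
    of the way along its transport path.
    """
    a, b = sorted(xs_a), sorted(xs_b)
    return [(1.0 - alpha) * p + alpha * q for p, q in zip(a, b)]
```

Setting alpha slightly outside [0, 1] gives the short extrapolation in time mentioned above.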
Building a Lego wall: Sequential action selection.
Arnold, Amy; Wing, Alan M; Rotshtein, Pia
2017-05-01
The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. To avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (in effect, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
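The classical construction the paper starts from projects the gradient onto the tangent space of the constraint: dx/dt = -P(x) grad f(x) with P = I - J^T (J J^T)^(-1) J. A sketch for f = x^2 + y^2 subject to x + y = 1, where J = [1, 1] gives a constant P; note this is the standard projection whose singularity (when J loses rank) the paper's modified matrix is designed to avoid:

```python
def projected_gradient_flow(x, y, steps=2000, h=0.01):
    """Forward-Euler integration of dx/dt = -P grad f for
    f = x^2 + y^2 on the constraint x + y = 1.
    With J = [1, 1], P = I - J^T J / 2 = [[0.5, -0.5], [-0.5, 0.5]],
    so the flow stays on the constraint line and descends f along it.
    """
    for _ in range(steps):
        gx, gy = 2.0 * x, 2.0 * y        # grad f
        px = 0.5 * gx - 0.5 * gy         # P @ grad f, first component
        py = -0.5 * gx + 0.5 * gy        # second component (= -px)
        x, y = x - h * px, y - h * py
    return x, y
```

Starting from the feasible point (1, 0), the flow converges to the constrained minimizer (0.5, 0.5) while x + y remains 1 throughout.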
NASA Astrophysics Data System (ADS)
Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho
2009-05-01
This paper presents a real-time implementation of non-uniformity correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system intended for a missile application. To design a small, lightweight, high-speed imaging system for a missile system, an SoPC (System on a Programmable Chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and the generation of control signals are implemented in the FPGA. In addition, three different NUC tables were prepared to shorten the operating time and reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board, and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.
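Two-point NUC, as performed above, fits a per-pixel gain and offset from flat-field frames at two blackbody temperatures so that every pixel maps onto the array-mean response. A minimal sketch on 1-D "frames" with made-up counts (a real system stores these as the NUC tables and applies them per pixel in the FPGA):

```python
def two_point_nuc(raw_cold, raw_hot):
    """Per-pixel gain/offset from flat-field frames at two temperatures.

    corrected = gain * raw + offset maps each pixel's cold and hot
    readings onto the array-mean cold and hot responses.
    """
    m_cold = sum(raw_cold) / len(raw_cold)
    m_hot = sum(raw_hot) / len(raw_hot)
    gain = [(m_hot - m_cold) / (h - c) for c, h in zip(raw_cold, raw_hot)]
    offset = [m_cold - g * c for g, c in zip(gain, raw_cold)]
    return gain, offset

def correct(frame, gain, offset):
    """Apply the correction to a raw frame."""
    return [g * x + o for x, g, o in zip(frame, gain, offset)]
```

One-point correction with a shutter updates only the offsets against a uniform shutter scene, reusing the stored gains.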
Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.
Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob
2017-01-01
Mass spectrometry, a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprised of millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention time-based data formats, but do not provide certain features such as dynamic summarization and storage and retrieval of point meta-data (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data with fast performance in both m/z and RT directions. Performance is reported on real data with comparisons against related published retrieval systems.
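Dynamic summarization of the kind MzTree provides can be caricatured as returning one representative point per m/z bin at a given zoom level, so a viewer never receives more points than it can draw. A toy sketch (a flat scan, not MzTree's actual spatial index):

```python
def summarize(points, lo, hi, n_bins):
    """At most n_bins summary points for the m/z range [lo, hi):
    the highest-intensity (mz, rt, intensity) point per m/z bin.
    """
    width = (hi - lo) / n_bins
    best = {}
    for mz, rt, inten in points:
        if lo <= mz < hi:
            b = int((mz - lo) / width)
            if b not in best or inten > best[b][2]:
                best[b] = (mz, rt, inten)
    return [best[b] for b in sorted(best)]
```

Re-querying with a narrower [lo, hi) and the same n_bins yields progressively finer detail, which is the behavior an interactive viewer needs.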
Two-stage, low noise advanced technology fan. 4: Aerodynamic final report
NASA Technical Reports Server (NTRS)
Harley, K. G.; Keenan, M. J.
1975-01-01
A two-stage research fan was tested to provide technology for designing a turbofan engine for an advanced, long-range commercial transport having a cruise Mach number of 0.85-0.9 and a noise level 20 EPNdB below current requirements. The fan design tip speed was 365.8 m/sec (1200 ft/sec); the hub/tip ratio was 0.4; the design pressure ratio was 1.9; and the design specific flow was 209.2 kg/sec/sq m (42.85 lbm/sec/sq ft). Two fan versions were tested: a baseline configuration, and an acoustically treated configuration with a sonic inlet device. The baseline version was tested with uniform inlet flow and with tip-radial and hub-radial inlet flow distortions. The baseline fan with uniform inlet flow attained an efficiency of 86.4% at design speed, but the stall margin was low. Tip-radial distortion increased stall margin 4 percentage points at design speed and reduced peak efficiency one percentage point. Hub-radial distortion decreased stall margin 4 percentage points at all speeds and reduced peak efficiency at design speed 8 percentage points. At design speed, the sonic inlet in the cruise position reduced stall margin one percentage point and efficiency 1.5 to 4.5 percentage points. The sonic inlet in the approach position reduced stall margin 2 percentage points.
[The added value of information summaries supporting clinical decisions at the point-of-care].
Banzi, Rita; González-Lorenzo, Marien; Kwag, Koren Hyogene; Bonovas, Stefanos; Moja, Lorenzo
2016-11-01
Evidence-based healthcare requires the integration of the best research evidence with clinical expertise and patients' values. International publishers are developing evidence-based information services and resources designed to overcome the difficulties in retrieving, assessing, and updating medical information, as well as to facilitate rapid access to valid clinical knowledge. Point-of-care information summaries are defined as web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. Their validity must be assessed against marketing claims that they are evidence-based. We periodically evaluate the content development processes of several international point-of-care information summaries. The number of these products has increased along with their quality. The latest analysis, done in 2014, identified 26 products and found that three of them (Best Practice, DynaMed, and UpToDate) scored highest across all evaluated dimensions (volume, quality of the editorial process, and evidence-based methodology). Point-of-care information summaries, as stand-alone products or integrated with other systems, are gaining ground in supporting clinical decisions. The choice of one product over another depends both on the properties of the service and on the preferences of users. However, even the most innovative information system must rely on transparent and valid contents. Individuals and institutions should regularly assess the value of point-of-care summaries, as their quality changes rapidly over time.
Fok, Carlotta Ching Ting; Henry, David; Allen, James
2015-10-01
The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, & Brown, Prevention Science, 2015) in this special section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples.
State-of-the-Art High Brightness RF Photo-Injector Design
NASA Astrophysics Data System (ADS)
Ferrario, Massimo; Clendenin, Jym; Palmer, Dennis; Rosenzweig, James; Serafini, Luca
2000-04-01
The art of designing optimized high-brightness electron RF photo-injectors has moved in the last decade from a cut-and-try procedure, guided by experimental experience and time-consuming particle tracking simulations, to a fast parameter-space scan, guided by recent analytical results and a fast-running semi-analytical code, so as to reach the operating point of maximum beam brightness. Scaling laws and the theory of the invariant envelope give designers excellent tools for a first choice of parameters, and the code HOMDYN, based on a multi-slice envelope description of the beam dynamics, is tailored to the space-charge-dominated dynamics of laminar beams in the presence of time-dependent space-charge forces, providing a very fast modeling capability for photo-injector design. We report here the results of a recent beam dynamics study motivated by the need to redesign the LCLS photoinjector. During this work a new effective working point for a split RF photoinjector was discovered by means of the approach described above. With a proper choice of RF gun and solenoid parameters, the emittance evolution shows a double-minimum behavior in the drift region. If the booster is located where the relative emittance maximum and the envelope waist occur, the second emittance minimum can be shifted to the booster exit and frozen at a very low level (0.3 mm-mrad for a 1 nC flat-top bunch), provided the invariant-envelope matching conditions are satisfied.
Design and Simulation of a PID Controller for Motion Control Systems
NASA Astrophysics Data System (ADS)
Hassan Abdullahi, Zakariyya; Danzomo, Bashir Ahmed; Suleiman Abdullahi, Zainab
2018-04-01
Motion control systems play an important role in many industrial applications, among them robot systems, missile launching, and positioning systems. However, the performance requirements of these applications, in terms of high accuracy, high speed, little or no overshoot, and robustness, pose continuous challenges in motion control system design and implementation. To meet these challenges, a PID controller was designed from a mathematical model of a DC motor using the classical root-locus approach. The reason for adopting root-locus design is that it reshapes the closed-loop response by placing the closed-loop poles of the system at desired points. Adding poles and zeros to the initial open-loop transfer function through the controller transforms the root locus so that the closed-loop poles land at the required points; the same process applies to discrete-time models. The advantage of the root locus over other methods is that it gives a direct way of tuning the parameters and makes the behavior of the whole system easy to predict. The controller performance was simulated using MATLAB code and a reasonable degree of accuracy was obtained. The proposed model was implemented in Simulink, and the results show that the PID controller met the transient performance specifications, with settling time below 0.1 s and overshoot below 5%. In terms of steady-state error, the PID controller gave a good response for both step and ramp inputs.
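The closed-loop behavior the abstract describes can be illustrated with a minimal discrete-time simulation. The sketch below is not the authors' MATLAB/Simulink model; the first-order motor model and the PID gains are illustrative assumptions chosen only to show the structure of such a simulation.

```python
# Minimal sketch: discrete-time PID speed control of a first-order DC motor
# model G(s) = K/(tau*s + 1). Motor parameters and gains are illustrative
# assumptions, not values from the paper.
def simulate_pid(Kp=6.0, Ki=30.0, Kd=0.05, K=1.0, tau=0.2,
                 setpoint=1.0, dt=0.001, t_end=0.5):
    y, integral = 0.0, 0.0
    prev_err = setpoint        # initialize so the first derivative term is 0
    ys = []
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = Kp * err + Ki * integral + Kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + K * u) / tau   # Euler step of tau*dy/dt = -y + K*u
        ys.append(y)
    return ys

ys = simulate_pid()
overshoot = max(ys) - 1.0   # small for these well-damped gains
```

With these gains the integral zero nearly cancels the slow closed-loop pole, so the response settles near the setpoint quickly with negligible overshoot; the root-locus design in the paper pursues the same pole-placement goal analytically.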
Dynamic fMRI of a decision-making task
NASA Astrophysics Data System (ADS)
Singh, Manbir; Sungkarat, Witaya
2008-03-01
A novel fMRI technique has been developed to capture the dynamics of the evolution of brain activity during complex tasks, such as those designed to evaluate the neural basis of decision-making under different situations. A task called the Iowa Gambling Task was used as an example. Six normal human volunteers were studied. The task was presented inside a 3T MRI scanner, and a dynamic fMRI study of the approximately 2 s period between the beginning and end of the decision-making period was conducted by employing a series of reference functions, separated by 200 ms, designed to capture activation at different time points within this period. As decision-making culminates with a button press, the timing of the button press was chosen as the reference (t=0) and the corresponding reference functions were shifted backward in steps of 200 ms from this point up to the time when motor activity from the previous button press became predominant. SPM was used to realign the data, apply a high-pass filter (cutoff 200 s), normalize to the Montreal Neurological Institute (MNI) template using a 12-parameter affine/non-linear transformation, smooth with an 8 mm Gaussian kernel, and run an event-related General Linear Model analysis for each of the shifted reference functions. The t-score of each activated voxel was then examined to find its peaking time. A random effects analysis (p<0.05) showed prefrontal, parietal, and bilateral hippocampal activation peaking at different times during the decision-making period in the n=6 group study.
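The shifted-reference-function idea can be sketched in a few lines. The press times, shift grid, event duration, TR, and scan count below are illustrative assumptions, and HRF convolution is omitted; this is not the SPM pipeline used in the study.

```python
# Reference functions time-locked to the button press (t = 0) and shifted
# backward in 200 ms steps to probe successive moments of the decision period.
def shifted_onsets(press_times, shift_ms):
    """Onsets for a reference function `shift_ms` before each button press."""
    return [t - shift_ms / 1000.0 for t in press_times]

def event_regressor(onsets, duration_s, tr_s, n_scans):
    """0/1 boxcar regressor sampled at each TR (HRF convolution omitted)."""
    reg = [0.0] * n_scans
    for onset in onsets:
        start = int(onset / tr_s)
        stop = int((onset + duration_s) / tr_s) + 1
        for i in range(max(start, 0), min(stop, n_scans)):
            reg[i] = 1.0
    return reg

press_times = [12.0, 30.5, 47.2]           # button presses (s), hypothetical
shifts_ms = [i * 200 for i in range(10)]   # 0, 200, ..., 1800 ms before press
regressors = {s: event_regressor(shifted_onsets(press_times, s), 0.5, 2.0, 30)
              for s in shifts_ms}
```

Each entry of `regressors` plays the role of one shifted reference function: fitting a GLM per entry and tracking where the t-score peaks yields the activation-timing map the abstract describes.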
High-Temperature Gas-Cooled Test Reactor Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-04-01
A point design has been developed for a 200 MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched UCO fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technological readiness level, licensing approach and costs.
Tex, David M; Nakamura, Tetsuya; Imaizumi, Mitsuru; Ohshima, Takeshi; Kanemitsu, Yoshihiko
2017-05-16
Tandem solar cells are suited for space applications due to their high performance, but they also have to be designed to minimize the influence of degradation by the high-energy particle flux in space. Analysis of subcell performance is crucial to understanding the device physics and achieving optimized designs of tandem solar cells. Here, the radiation-induced damage of inverted-grown InGaP/GaAs/InGaAs triple-junction solar cells at various electron fluences is characterized using conventional current-voltage (I-V) measurements and time-resolved photoluminescence (PL). The conversion efficiencies of the entire device before and after damage are measured with I-V curves and compared with the efficiencies predicted from the time-resolved method. Using the time-resolved data, the change in the carrier dynamics in the subcells can be discussed. Our optical method predicts the absolute electrical conversion efficiency of the device with an accuracy of better than 5%. While both the InGaP and GaAs subcells suffered significant material degradation, the performance loss of the total device can be ascribed entirely to the damage in the GaAs subcell. This points out the importance of high internal electric fields at the operating point.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments.
Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia
2016-09-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
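A common way to obtain fitness-effect estimates from such time-sampled counts is to regress the log ratio of mutant to reference abundance on time; the slope is the selection coefficient. The sketch below illustrates that idea with made-up counts; it is not the authors' estimator or their web tool.

```python
import math

# Sketch: estimating a selection coefficient from time-sampled bulk-competition
# counts as the least-squares slope of log(mutant/reference) against time.
# The counts below are illustrative, not data from the paper.
def selection_coefficient(times, mut_counts, ref_counts):
    y = [math.log(m / r) for m, r in zip(mut_counts, ref_counts)]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(y) / n
    num = sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den   # slope = selection coefficient per unit time

times = [0, 2, 4, 6, 8]                    # sampled time points
mut = [1000, 1220, 1490, 1820, 2230]       # mutant grows relative to reference
ref = [1000, 1000, 1000, 1000, 1000]
s = selection_coefficient(times, mut, ref)   # ~0.1 per unit time
```

The paper's result that extra time points, clustered at the ends of the experiment, help more than extra sequencing depth follows from this regression view: the slope variance shrinks with the spread of the sampled times (the `den` term), while depth only reduces noise in each individual `y` value.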
GIOTTO's antenna de-spin mechanism: Its lubrication and thermal vacuum performance
NASA Technical Reports Server (NTRS)
Todd, M. J.; Parker, K.
1987-01-01
Except in the near-Earth phase of GIOTTO's mission to Comet Halley, the HGA (high gain antenna) on board GIOTTO was the only designed means of up/down communications. The spacecraft's spin stabilization required that the HGA be despun at the spin rate, nominally 15 rpm, in order to keep the HGA pointing accurately at the Earth. A dual-servomotor despin mechanism was designed and built by SEP of France for this purpose. The expected thermal environment suggested that dry lubrication was preferable to wet for the ball bearings, but there existed no relevant data on the torque noise spectrum of candidate solid lubricants. Therefore, ad hoc torque noise tests were run with two solid lubricants: an ion-plated lead film plus a lead-bronze cage (retainer), and a PTFE composite cage only. The lead lubrication showed the better spectrum up to the mission lifetime point, so it was selected for continued testing over some 20 times the Halley mission life, with periodic torque spectrum monitoring. The spectrum remained well within the pointing error budget over the 100 million revolutions covered.
Geoscience laser altimeter system-stellar reference system
NASA Astrophysics Data System (ADS)
Millar, Pamela S.; Sirota, J. Marcos
1998-01-01
GLAS is an EOS space-based laser altimeter being developed to profile the height of the Earth's ice sheets with ~15 cm single-shot accuracy from space under NASA's Mission to Planet Earth (MTPE). The primary science goal of GLAS is to determine whether the ice sheets are growing or diminishing, for climate change modeling. This is achieved by measuring the ice sheet heights over Greenland and Antarctica to 1.5 cm/yr over 100 km×100 km areas by crossover analysis (Zwally 1994). This measurement performance requires the instrument to determine the pointing of the laser beam to ~5 μrad (~1 arcsecond), 1-sigma, with respect to the inertial reference frame. The GLAS design incorporates a stellar reference system (SRS) to relate the laser beam pointing angle to the star field with this accuracy. This is the first time a spaceborne laser altimeter will measure pointing to such high accuracy. The design for the stellar reference system combines an attitude determination system (ADS) with a laser reference system (LRS) to meet this requirement. The SRS approach and expected performance are described in this paper.
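The size of the pointing requirement can be checked with a back-of-the-envelope calculation: a pointing-angle error displaces the laser footprint horizontally, and over sloped terrain that displacement maps into a height error. The orbit altitude and surface slope below are assumed values, not figures from the paper.

```python
import math

# Rough consistency check of the ~1 arcsecond pointing requirement.
altitude_m = 600e3          # assumed ~600 km orbit (not from the paper)
pointing_err_rad = 5e-6     # ~5 microradian (~1 arcsecond) requirement
surface_slope_deg = 1.0     # typical ice-sheet slope, assumed

horizontal_shift = altitude_m * pointing_err_rad          # ~3 m footprint shift
height_err = horizontal_shift * math.tan(math.radians(surface_slope_deg))
# height_err is a few centimeters -- small compared with the ~15 cm
# single-shot accuracy budget quoted in the abstract
```

Under these assumptions the pointing-knowledge contribution to the height error is about 5 cm on a 1-degree slope, which is why the SRS must tie the beam direction to the star field at the arcsecond level.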
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum-time maneuver that brings the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS while staying within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast, and a control scheme implementing those laws on a laboratory representation of the structure. Control sensors and actuators are typical of those the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer-based central processing units with appropriate analog interfaces for implementation of the primary control system and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
A Quartz Crystal Microbalance dew point sensor without frequency measurement.
Wang, Guohua; Zhang, Weishuo; Wang, Shuo; Sun, Jinglin
2014-11-01
This work deals with the design of a dew point sensor based on a Quartz Crystal Microbalance (QCM) that does not require measuring the frequency. The idea is inspired by the fact that the Colpitts oscillation circuit stops oscillating when the QCM operates in a liquid medium. The quartz crystal and the electrode were designed through finite element simulation, and an oscillation-stop experiment was conducted to verify the sensitivity. Moreover, the measurement result was calibrated to approach the true value. Finally, a series of dew points at the same temperature was measured with the designed sensor. The results show that the designed sensor detects the dew point with acceptable accuracy.
A quartz crystal microbalance dew point sensor without frequency measurement
NASA Astrophysics Data System (ADS)
Wang, Guohua; Zhang, Weishuo; Wang, Shuo; Sun, Jinglin
2014-11-01
This work deals with the design of a dew point sensor based on a Quartz Crystal Microbalance (QCM) that does not require measuring the frequency. The idea is inspired by the fact that the Colpitts oscillation circuit stops oscillating when the QCM operates in a liquid medium. The quartz crystal and the electrode were designed through finite element simulation, and an oscillation-stop experiment was conducted to verify the sensitivity. Moreover, the measurement result was calibrated to approach the true value. Finally, a series of dew points at the same temperature was measured with the designed sensor. The results show that the designed sensor detects the dew point with acceptable accuracy.
Extending the boundaries of reverse engineering
NASA Astrophysics Data System (ADS)
Lawrie, Chris
2002-04-01
In today's marketplace the potential of Reverse Engineering as a time-compression tool is commonly lost under its traditional definition. The term Reverse Engineering was coined at the advent of CMM machines and 3D CAD systems to describe the process of fitting surfaces to captured point data. Since these early beginnings, downstream hardware scanning and digitizing systems have evolved in parallel with upstream demand, greatly increasing the potential of a point-cloud data set within engineering design and manufacturing processes. This paper discusses the issues surrounding Reverse Engineering at the turn of the millennium.
Automated Parameter Studies Using a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosimis, Michael J.; Nemec, Marian
2004-01-01
Computational Fluid Dynamics (CFD) is now routinely used to analyze isolated points in a design space by performing steady-state computations at fixed flight conditions (Mach number, angle of attack, sideslip) for a fixed geometric configuration of interest. This "point analysis" provides detailed information about the flowfield, which aids an engineer in understanding, or correcting, a design. A point analysis is typically performed using high-fidelity methods at a handful of critical design points, e.g. a cruise or landing configuration, or a sample of points along a flight trajectory.
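Automating such parameter studies amounts to enumerating the design space and dispatching one point analysis per combination of flight conditions. A minimal sketch, with `run_case` as a hypothetical stand-in for launching a solver:

```python
from itertools import product

# Enumerate fixed flight conditions (Mach, angle of attack, sideslip) and
# dispatch one steady-state point analysis per combination. The grids are
# illustrative; `run_case` is a placeholder, not a real CFD interface.
def run_case(mach, alpha_deg, beta_deg):
    # In practice this would write solver inputs and launch a job.
    return {"mach": mach, "alpha": alpha_deg, "beta": beta_deg}

machs = [0.7, 0.8, 0.85]
alphas = [0.0, 2.0, 4.0]
betas = [0.0, 2.0]

cases = [run_case(m, a, b) for m, a, b in product(machs, alphas, betas)]
# 3 x 3 x 2 = 18 steady-state point analyses
```

Each element of `cases` corresponds to one "point analysis" in the abstract's sense; the automation lies in generating and managing the full grid rather than hand-picking a few critical points.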
Jern, Patrick; Hakala, Outi; Kärnä, Antti; Gunst, Annika
2018-04-01
The aim of the present study was to investigate how women's tendency to pretend orgasm during intercourse is associated with orgasm function and intercourse-related pain, using a longitudinal design where temporal stability and possible causal relationships could be modeled. The study sample consisted of 1421 Finnish women who had participated in large-scale population-based data collections conducted at two time points 7 years apart. Pretending orgasm was assessed for the past 4 weeks, and orgasm function and pain were assessed using the Female Sexual Function Index for the past 4 weeks. Associations were also computed separately in three groups of women based on relationship status. Pretending orgasm was considerably variable over time, with 34% of the women having pretended orgasm a few times or more at least at one time point, and 11% having done so at both time points. Initial bivariate correlations revealed associations between pretending orgasm and orgasm problems within and across time, whereas associations with pain were more ambiguous. However, we found no support in the path model for the leading hypotheses that pretending orgasms would predict pain or orgasm problems over a long period of time, or that pain or orgasm problems would predict pretending orgasm. The strongest predictor of future pretending in our model was previous pretending (R² = .14). Relationship status did not seem to affect pretending orgasm in any major way.
NASA Technical Reports Server (NTRS)
Robertson, Brent; Sabelhaus, Phil; Mendenhall, Todd; Fesq, Lorraine
1998-01-01
On December 13th 1998, the Total Ozone Mapping Spectrometer - Earth Probe (TOMS-EP) spacecraft experienced a Single Event Upset which caused the system to reconfigure and enter a Safe Mode. This incident occurred two and a half years after the launch of the spacecraft which was designed for a two year life. A combination of factors, including changes in component behavior due to age and extended use, very unfortunate initial conditions and the safe mode processing logic prevented the spacecraft from entering its nominal long term storage mode. The spacecraft remained in a high fuel consumption mode designed for temporary use. By the time the onboard fuel was exhausted, the spacecraft was Sun pointing in a high rate flat spin. Although the uncontrolled spacecraft was initially in a power and thermal safe orientation, it would not stay in this state indefinitely due to a slow precession of its momentum vector. A recovery team was immediately assembled to determine if there was time to develop a method of despinning the vehicle and return it to normal science data collection. A three stage plan was developed that used the onboard magnetic torque rods as actuators. The first stage was designed to reduce the high spin rate to within the linear range of the gyros. The second stage transitioned the spacecraft from sun pointing to orbit reference pointing. The final stage returned the spacecraft to normal science operation. The entire recovery scenario was simulated with a wide range of initial conditions to establish the expected behavior. The recovery sequence was started on December 28th 1998 and completed by December 31st. TOMS-EP was successfully returned to science operations by the beginning of 1999. This paper describes the TOMS-EP Safe Mode design and the factors which led to the spacecraft anomaly and loss of fuel. The recovery and simulation efforts are described. 
Flight data are presented which show the performance of the spacecraft during its return to science. Finally, lessons learned are presented.
2008-01-01
Objective: To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design: Cross-sectional and longitudinal study. Participants: 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods: Retinal thickness was measured with OCT and visual acuity was measured with the electronic-ETDRS procedure. Main Outcome Measures: OCT-measured center point thickness and visual acuity. Results: The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best-fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 micron decrease in center point thickness at baseline, with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44, with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18–26% at the three time points). Conclusions: There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME. 
However, a wide range of visual acuity may be observed for a given degree of retinal edema and paradoxical increases in center point thickening with increases in visual acuity as well as paradoxical decreases in center point thickening with decreases in visual acuity were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute as a surrogate for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615
Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks
NASA Astrophysics Data System (ADS)
Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue
2013-03-01
With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly, so it is important to design effective energy-saving schemes for PONs. Energy-saving schemes have generally focused on putting low-loaded Optical Network Units (ONUs) to sleep, which tends to introduce large packet delays. Further, the traditional ONU sleep modes cannot put the transmitter and the receiver to sleep independently when only one of them has no packets to handle, which wastes energy. Thus, in this paper we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design an algorithm and a rule for downstream packet scheduling at the inter-ONU and intra-ONU levels, respectively, to reduce the downstream packet delay. We then propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver, ensuring that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that carries 10 time points for the sleep processes. In ESPS, the 10 time points are calculated from the allocated bandwidths in both the upstream and the downstream. The simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and the average delay of both real-time and non-real-time downstream packets. They also show that the average energy consumption of each ONU is lower in larger networks; hence, ESPS is better suited to larger networks.
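The independent transmitter/receiver sleep idea can be sketched as interval arithmetic: given each unit's active windows within a polling cycle, it can sleep in the complement of those windows. The windows below are illustrative; this is not the ESPS algorithm or its GATE message format.

```python
# Hybrid-sleep sketch: sleep the ONU transmitter and receiver independently
# outside their own active windows, and the whole ONU where both are idle.
# Times (ms within one polling cycle) are hypothetical.
def sleep_intervals(active, cycle_end):
    """Complement of sorted, non-overlapping (start, end) active windows."""
    idle, t = [], 0.0
    for start, end in sorted(active):
        if start > t:
            idle.append((t, start))
        t = max(t, end)
    if t < cycle_end:
        idle.append((t, cycle_end))
    return idle

tx_active = [(0.2, 0.5)]               # upstream grant, hypothetical
rx_active = [(0.1, 0.3), (0.8, 0.9)]   # downstream bursts, hypothetical
tx_sleep = sleep_intervals(tx_active, 1.0)
rx_sleep = sleep_intervals(rx_active, 1.0)
```

A conventional whole-ONU sleep mode could only use time where `tx_sleep` and `rx_sleep` overlap; sleeping the two units separately recovers the rest, which is the energy the hybrid mode targets.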
Observed Measures of Negative Parenting Predict Brain Development during Adolescence.
Whittle, Sarah; Vijayakumar, Nandita; Dennison, Meg; Schwartz, Orli; Simmons, Julian G; Sheeber, Lisa; Allen, Nicholas B
2016-01-01
Limited attention has been directed toward the influence of non-abusive parenting behaviour on brain structure in adolescents, although environmental influences during this period are likely to affect the way the brain develops over time. The aim of this study was to investigate the associations of aggressive and positive parenting behaviors with brain development from early to late adolescence and, in turn, with psychological and academic functioning during late adolescence, using a multi-wave longitudinal design. Three hundred and sixty-seven magnetic resonance imaging (MRI) scans were obtained over three time points from 166 adolescents (11-20 years). At the first time point, observed measures of maternal aggressive and positive behaviors were obtained. At the final time point, measures of psychological and academic functioning were obtained. Results indicated that a higher frequency of maternal aggressive behavior was associated with alterations in the development of right superior frontal and lateral parietal cortical thickness, and of nucleus accumbens volume, in males. Development of the superior frontal cortex in males mediated the relationship between maternal aggressive behaviour and measures of late adolescent functioning. We suggest that our results support an association between negative parenting and adolescent functioning, which may be mediated by immature or delayed brain maturation.
Observed Measures of Negative Parenting Predict Brain Development during Adolescence
Whittle, Sarah; Vijayakumar, Nandita; Dennison, Meg; Schwartz, Orli; Simmons, Julian G.; Sheeber, Lisa; Allen, Nicholas B.
2016-01-01
Limited attention has been directed toward the influence of non-abusive parenting behaviour on brain structure in adolescents, although environmental influences during this period are likely to affect the way the brain develops over time. The aim of this study was to investigate the associations of aggressive and positive parenting behaviors with brain development from early to late adolescence and, in turn, with psychological and academic functioning during late adolescence, using a multi-wave longitudinal design. Three hundred and sixty-seven magnetic resonance imaging (MRI) scans were obtained over three time points from 166 adolescents (11–20 years). At the first time point, observed measures of maternal aggressive and positive behaviors were obtained. At the final time point, measures of psychological and academic functioning were obtained. Results indicated that a higher frequency of maternal aggressive behavior was associated with alterations in the development of right superior frontal and lateral parietal cortical thickness, and of nucleus accumbens volume, in males. Development of the superior frontal cortex in males mediated the relationship between maternal aggressive behaviour and measures of late adolescent functioning. We suggest that our results support an association between negative parenting and adolescent functioning, which may be mediated by immature or delayed brain maturation. PMID:26824348
Jae-Won Lee; Rita C.L.B. Rodrigues; Thomas W. Jeffries
2009-01-01
Response surface methodology was used to evaluate optimal time, temperature, and oxalic acid concentration for simultaneous saccharification and fermentation (SSF) of corncob particles by Pichia stipitis CBS 6054. Fifteen different pretreatment conditions were examined in a 2³ full factorial design with six axial points. Temperatures ranged from 132 to 180º...
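The run layout of such a design is easy to enumerate: the eight corner points of the 2³ factorial plus six axial points, i.e. a central composite arrangement. The axial distance below is a common rotatable-design choice, assumed for illustration rather than taken from the study.

```python
from itertools import product

# Coded design points for a 2^3 full factorial augmented with six axial
# points (a central composite layout). The axial distance is an assumed,
# conventional value, not the one used in the paper.
alpha = 1.682   # rotatable axial distance for 3 factors (4th root of 8)
factorial = list(product([-1, 1], repeat=3))            # 8 corner points
axial = [tuple(a if i == j else 0 for j in range(3))
         for i in range(3) for a in (-alpha, alpha)]    # 6 axial points
design = factorial + axial
# 8 + 6 = 14 runs before any center-point replicates
```

With one center point added, this gives the fifteen distinct conditions the abstract mentions; the axial points are what let the response surface fit curvature in each factor.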
ERIC Educational Resources Information Center
Rodriguez, David B.
2013-01-01
Employing a non-experimental, ex-post facto design, the study examined the relationship of student demographic information and internal characteristics identified from the Learning and Study Strategies Inventory (LASSI) to student persistence, grade point average, and academic success. Cognitive Learning Theory (CLT), which focuses on the internal…
15 CFR Supplement No. 7 to Part 760 - Interpretation
Code of Federal Regulations, 2011 CFR
2011-01-01
... inventory or may re-ship them to other markets (the United States person may not return them to the original... Department wishes to point out that, when faced with a boycotting country's refusal to permit entry of the... or origin at the time of their entry into the boycotting country by (a) uniqueness of design or...
15 CFR Supplement No. 7 to Part 760 - Interpretation
Code of Federal Regulations, 2012 CFR
2012-01-01
... inventory or may re-ship them to other markets (the United States person may not return them to the original... Department wishes to point out that, when faced with a boycotting country's refusal to permit entry of the... or origin at the time of their entry into the boycotting country by (a) uniqueness of design or...
15 CFR Supplement No. 7 to Part 760 - Interpretation
Code of Federal Regulations, 2010 CFR
2010-01-01
... inventory or may re-ship them to other markets (the United States person may not return them to the original... Department wishes to point out that, when faced with a boycotting country's refusal to permit entry of the... or origin at the time of their entry into the boycotting country by (a) uniqueness of design or...
15 CFR Supplement No. 7 to Part 760 - Interpretation
Code of Federal Regulations, 2013 CFR
2013-01-01
... inventory or may re-ship them to other markets (the United States person may not return them to the original... Department wishes to point out that, when faced with a boycotting country's refusal to permit entry of the... or origin at the time of their entry into the boycotting country by (a) uniqueness of design or...
The Impact of a Multi-Component Physical Activity Programme in Low-Income Elementary Schools
ERIC Educational Resources Information Center
Massey, William V.; Stellino, Megan B.; Holliday, Megan; Godbersen, Travis; Rodia, Rachel; Kucher, Greta; Wilkison, Megan
2017-01-01
Objective: To identify the effects of a structured and multifaceted physical activity and recess intervention on student and adult behaviour in school. Design: Mixed-methods and community-based participatory approach. Setting: Large, urban, low-income school district in the USA. Methods: Data were collected at three time points over a 1-year…
What You Can--and Can't--Do with Three-Wave Panel Data
ERIC Educational Resources Information Center
Vaisey, Stephen; Miles, Andrew
2017-01-01
The recent change in the general social survey (GSS) to a rotating panel design is a landmark development for social scientists. Sociological methodologists have argued that fixed-effects (FE) models are generally the best starting point for analyzing panel data because they allow analysts to control for unobserved time-constant heterogeneity. We…
Grandmothers and Caregiving to Grandchildren: Continuity, Change, and Outcomes over 24 Months
ERIC Educational Resources Information Center
Musil, Carol M.; Gordon, Nahida L.; Warner, Camille B.; Zauszniewski, Jaclene A.; Standing, Theresa; Wykle, May
2011-01-01
Purpose: Transitions in caregiving, such as becoming a primary caregiver to grandchildren or having adult children and grandchildren move in or out, may affect the well-being of the grandmother. Design and Methods: This report describes caregiving patterns at 3 time points over 24 months in a sample of 485 Ohio grandmothers and examines the…
Using Time-Compression to Make Multimedia Learning More Efficient: Current Research and Practice
ERIC Educational Resources Information Center
Pastore, Raymond; Ritzhaupt, Albert D.
2015-01-01
It is now common practice for instructional designers to incorporate digitally recorded lectures for Podcasts (e.g., iTunes University), voice-over presentations (e.g., PowerPoint), animated screen captures with narration (e.g., Camtasia), and other various learning objects with digital audio in the instructional method. As a result, learners are…
NLS Handbook, 2005. National Longitudinal Surveys
ERIC Educational Resources Information Center
Bureau of Labor Statistics, 2006
2006-01-01
The National Longitudinal Surveys (NLS), sponsored by the U.S. Bureau of Labor Statistics (BLS), are a set of surveys designed to gather information at multiple points in time on the labor market experiences of groups of men and women. Each of the cohorts has been selected to represent all people living in the United States at the initial…
ERIC Educational Resources Information Center
Huang, Chiungjung
2011-01-01
As few studies utilized longitudinal design to examine the development of Internet use for communication, the purpose of this study was to examine the effects of gender and initial Internet use for communication on subsequent use. The study sample was 280 undergraduate students who were assessed at five time points. Hierarchical linear models were…
Teaching Strategies & Techniques for Adjunct Faculty. Third Edition. Higher Education Series.
ERIC Educational Resources Information Center
Greive, Donald
This booklet presents teaching strategies and techniques in a quick reference format. It was designed specifically to assist adjunct and part-time faculty, who have careers outside of education, to efficiently grasp many of the concepts necessary for effective teaching. Included are a checklist of points to review prior to beginning a teaching…
The Future of Marketing Scholarship: Recruiting for Marketing Doctoral Programs
ERIC Educational Resources Information Center
Davis, Donna F.; McCarthy, Teresa M.
2005-01-01
As demand for business education is rising, the production of business doctorates continues to fall. Between 1995 and 2001, new business doctorates declined 18%, dropping to the lowest point since 1987. In the same time frame, new marketing doctorates dropped by 32%. This article reports the results of a study designed to (1) assess enrollment…
ERIC Educational Resources Information Center
Heric, Matthew; Carter, Jenn
2011-01-01
Cognitive readiness (CR) and performance for operational time-critical environments are continuing points of focus for military and academic communities. In response to this need, we designed an open source interactive CR assessment application as a highly adaptive and efficient testing administration and analysis tool. It is capable…
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.; Emmitt, G. David; Frehlich, Rod G.; Amzajerdian, Farzin; Singh, Upendra N.
2002-01-01
An end-to-end point design, including lidar, orbit, scanning, atmospheric, and data processing parameters, for space-based global profiling of atmospheric wind will be presented. The point design attempts to match the recent NASA/NOAA draft science requirements for wind measurement.
Eliasson, Arne H; Eliasson, Arn H; Lettieri, Christopher J
2017-05-01
To inform the design of a sleep improvement program for college students, we assessed academic performance, sleep habits, study hours, and extracurricular time, hypothesizing that there would be differences between US-born and foreign-born students. Questionnaires queried participants on bedtimes, wake times, nap frequency, differences in weekday and weekend sleep habits, study hours, grade point average, time spent at paid employment, and other extracurricular activities. Comparisons were made using chi square tests for categorical data and t tests for continuous data between US-born and foreign-born students. Of 120 participants (55% women) with racial diversity (49 whites, 18 blacks, 26 Hispanics, 14 Asians, and 13 other), 49 (41%) were foreign-born. Comparisons between US-born and foreign-born students showed no differences in average age or gender though US-born had more whites. There were no differences between US-born and foreign-born students for grade point averages, weekday bedtimes, wake times, or total sleep times. However, US-born students averaged 50 min less study time per day (p = 0.01), had almost 9 h less paid employment per week (14.5 vs 23.4 h per week, p = 0.001), and stayed up to socialize more frequently (63% vs 43%, p = 0.03). Foreign-born students awakened an hour earlier and averaged 40 min less sleep per night on weekends. Cultural differences among college students have a profound effect on sleep habits, study hours, and extracurricular time. The design of a sleep improvement program targeting a population with diverse cultural backgrounds must factor in such behavioral variations in order to have relevance and impact.
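As a rough, hypothetical sketch of the statistical comparisons described above (chi-square tests for categorical data, t tests for continuous data), the following uses synthetic stand-in samples; the study-time values and the 2x2 table are loose reconstructions from the quoted group sizes and percentages, not the study data.

```python
import numpy as np
from scipy import stats

# Synthetic daily study-time samples (hours): stand-ins for the survey
# data, sized like the two groups (71 US-born, 49 foreign-born).
rng = np.random.default_rng(0)
us_born = rng.normal(loc=2.0, scale=0.8, size=71)
foreign_born = rng.normal(loc=2.8, scale=0.8, size=49)

# Continuous outcome (study hours): two-sample t test.
t_stat, p_cont = stats.ttest_ind(us_born, foreign_born)

# Categorical outcome (stayed up to socialize: yes/no): chi-square test
# on a 2x2 table reconstructed from the reported 63% vs 43% rates.
table = np.array([[45, 26],   # US-born: yes, no
                  [21, 28]])  # foreign-born: yes, no
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print(f"t-test p={p_cont:.3f}, chi-square p={p_cat:.3f}")
```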
NASA Technical Reports Server (NTRS)
Harris, H. M.; Bergam, M. J.; Kim, S. L.; Smith, E. A.
1987-01-01
Shuttle Mission Design and Operations Software (SMDOS) assists in design and operation of missions involving spacecraft in low orbits around Earth by providing orbital and graphics information. SMDOS performs the following five functions: display two world and two polar maps or any user-defined window 5 degrees high in latitude by 5 degrees wide in longitude in one of eight standard projections; designate Earth sites by points or polygon shapes; plot spacecraft ground track with 1-min demarcation lines; display, by means of different colors, availability of Tracking and Data Relay Satellite to Shuttle; and calculate available times and orbits to view a particular site, and corresponding look angles. SMDOS is written in Laboratory Micro-systems FORTH (1979 standard).
Transfers between libration-point orbits in the elliptic restricted problem
NASA Astrophysics Data System (ADS)
Hiday-Johnston, L. A.; Howell, K. C.
1994-04-01
A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.
Theoretical relation between halo current-plasma energy displacement/deformation in EAST
NASA Astrophysics Data System (ADS)
Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen
2018-04-01
In this paper, a theoretical model for calculating the halo current has been developed. This work is novel in that no theoretical calculation of the halo current has been reported so far; this is the first use of a theoretical approach. The research started by calculating points for the plasma energy in terms of poloidal and toroidal magnetic field orientations. While calculating these points, the work was extended to calculate the halo current and to develop the theoretical model. Two cases were considered for analyzing the plasma energy when it flows downward or upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and the corresponding mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. At first, the halo current was established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to relate the theoretical model to experimental results so as to evaluate, as a precaution, the plasma behavior in any tokamak.
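A minimal numeric sketch of the scaling quoted above (peak halo current of roughly 0.4 times the plasma current); the fraction and the example plasma current are illustrative assumptions, not EAST measurements.

```python
# Minimal sketch of the reported scaling: peak halo current is
# estimated at about 0.4 times the plasma current. The fraction and
# the example plasma current below are illustrative, not EAST data.
HALO_FRACTION = 0.4

def estimate_halo_current(plasma_current_amps: float,
                          fraction: float = HALO_FRACTION) -> float:
    """Return the estimated peak halo current for a given plasma current."""
    if plasma_current_amps < 0:
        raise ValueError("plasma current must be non-negative")
    return fraction * plasma_current_amps

# Example: a 400 kA plasma current gives a ~160 kA halo-current estimate.
print(estimate_halo_current(4.0e5))  # -> 160000.0
```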
Real time monitoring system used in route planning for the electric vehicle
NASA Astrophysics Data System (ADS)
Ionescu, LM; Mazare, A.; Serban, G.; Ionita, S.
2017-10-01
The electric vehicle is a new consumer of electricity that is becoming more and more widespread. Under these circumstances, new strategies for optimizing power consumption and increasing vehicle autonomy must be designed. These must include route planning along with consumption, fuelling points and points of interest. The hardware and software solution proposed by us allows non-invasive monitoring of power consumption with energy autonomy (it does not add any extra consumption), data transmission to a server, and data fusion with the route, the points of interest along the route, and the power supply points. As a result, an optimal route planning service will be provided to the driver, considering the route, the requirements of the electric vehicle and the consumer profile. The solution can be easily installed on any type of electric car, as it does not involve any intervention on the vehicle's equipment.
Investigation of the Parameters of Sealed Triple-Point Cells for Cryogenic Gases
NASA Astrophysics Data System (ADS)
Fellmuth, B.; Wolber, L.
2011-01-01
An overview is given of the parameters of a large number of sealed triple-point cells for the cryogenic gases hydrogen, oxygen, neon, and argon, determined within the framework of an international star intercomparison, to optimize the measurement of melting curves and to establish complete and reliable uncertainty budgets for the realization of temperature fixed points. Special emphasis is given to the question of whether the parameters are primarily influenced by the cell design or by the properties of the fixed-point samples. To explain the surprisingly long periods of thermal recovery after the heat pulses of the intermittent heating through the melting range, a simple model is developed based on a newly defined heat-capacity equivalent, which accounts for the heat of fusion and a melting-temperature inhomogeneity. The analysis of the recovery using a graded set of exponential functions with different time constants is also explained in detail.
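The recovery analysis with a graded set of exponentials can be sketched as a least-squares fit; the two time constants and the synthetic noisy curve below are illustrative assumptions, not measured cell data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit a thermal-recovery trace after a heat pulse with a graded
# set of exponential terms (two time constants here). The synthetic
# data and parameter values are illustrative, not cell measurements.
def recovery(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 60.0, 200)              # minutes after the pulse
true = recovery(t, 0.8, 3.0, 0.2, 25.0)      # fast + slow components
rng = np.random.default_rng(1)
data = true + rng.normal(scale=0.005, size=t.size)

popt, _ = curve_fit(recovery, t, data, p0=[1.0, 1.0, 0.5, 10.0])
a1, tau1, a2, tau2 = popt
print("recovered time constants:", sorted([round(tau1, 1), round(tau2, 1)]))
```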
Dynamically encircling an exceptional point for asymmetric mode switching.
Doppler, Jörg; Mailybaev, Alexei A; Böhm, Julian; Kuhl, Ulrich; Girschik, Adrian; Libisch, Florian; Milburn, Thomas J; Rabl, Peter; Moiseyev, Nimrod; Rotter, Stefan
2016-09-01
Physical systems with loss or gain have resonant modes that decay or grow exponentially with time. Whenever two such modes coalesce both in their resonant frequency and their rate of decay or growth, an 'exceptional point' occurs, giving rise to fascinating phenomena that defy our physical intuition. Particularly intriguing behaviour is predicted to appear when an exceptional point is encircled sufficiently slowly, such as a state-flip or the accumulation of a geometric phase. The topological structure of exceptional points has been experimentally explored, but a full dynamical encircling of such a point and the associated breakdown of adiabaticity have remained out of reach of measurement. Here we demonstrate that a dynamical encircling of an exceptional point is analogous to the scattering through a two-mode waveguide with suitably designed boundaries and losses. We present experimental results from a corresponding waveguide structure that steers incoming waves around an exceptional point during the transmission process. In this way, mode transitions are induced that transform this device into a robust and asymmetric switch between different waveguide modes. This work will enable the exploration of exceptional point physics in system control and state transfer schemes at the crossroads between fundamental research and practical applications.
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.
Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun
2014-06-13
In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed by two stages: development of visual information delivery assistant (VIDA) with a stereo camera and adding a tactile feedback interface with dual actuators for guidance and distance feedbacks. In the first stage, user's pointing finger is automatically detected using color and disparity data from stereo images and then a 3D pointing direction of the finger is estimated with its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and the distance is then estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedbacks are perfectly identifiable to the blind.
Interactive-cut: Real-time feedback segmentation for translational research.
Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher
2014-06-01
In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. To that end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, the color or gray value information that is needed for the approach can be automatically extracted around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach.
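A toy version of seed-based mincut segmentation, in the spirit of the approach but not the authors' implementation: one user seed inside a bright 3x3 "target", the image border tied to background, and edge capacities derived from gray-value similarity.

```python
import networkx as nx
import numpy as np

# Toy seed-based graph-cut segmentation on a 5x5 gray image: high
# capacity between similar pixels, low capacity across intensity edges,
# so the minimum cut falls on the object boundary. Illustrative only.
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0            # bright square = "target structure"
seed = (2, 2)                  # single seed point inside the target

G = nx.Graph()
for (r, c), v in np.ndenumerate(img):
    for dr, dc in ((0, 1), (1, 0)):          # 4-neighborhood
        nr, nc = r + dr, c + dc
        if nr < img.shape[0] and nc < img.shape[1]:
            w = np.exp(-10.0 * abs(v - img[nr, nc]))
            G.add_edge((r, c), (nr, nc), capacity=w)

G.add_edge("src", seed, capacity=float("inf"))       # seed -> foreground
for (r, c), _ in np.ndenumerate(img):
    if r in (0, 4) or c in (0, 4):                   # border -> background
        G.add_edge("snk", (r, c), capacity=float("inf"))

cut_value, (fg, bg) = nx.minimum_cut(G, "src", "snk")
segmented = sorted(p for p in fg if p != "src")
print(len(segmented), "pixels segmented")
```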
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
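A bare-bones illustration of locating a single connectivity change point, in the spirit of DCR but without graph estimation or permutation inference: scan candidate split times and score each by the difference between the before/after ROI correlation matrices. The data are synthetic (two ROIs, one change at t = 120).

```python
import numpy as np

# Synthetic two-ROI time series: uncorrelated before the change point,
# strongly correlated after it. A simplified stand-in for DCR.
rng = np.random.default_rng(2)
n, change = 200, 120
x = rng.normal(size=n)
y = np.where(np.arange(n) < change,
             rng.normal(size=n),                   # uncorrelated regime
             0.9 * x + 0.3 * rng.normal(size=n))   # correlated regime
data = np.column_stack([x, y])                     # time x ROI

def score(t):
    # Frobenius distance between the correlation matrices of the two
    # temporal partitions induced by a candidate split at time t.
    c1 = np.corrcoef(data[:t].T)
    c2 = np.corrcoef(data[t:].T)
    return np.linalg.norm(c1 - c2)

candidates = range(30, n - 30)       # keep both segments sizable
est = max(candidates, key=score)
print("estimated change point:", est)
```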
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment
Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon
2013-01-01
As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
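The detection model above can be caricatured with a small Monte-Carlo sketch: noise photoelectrons arrive as a Poisson process over the range gate, the signal return fires at the true flight time with some probability, and the Geiger-mode detector records only the first event per shot. All rates below are illustrative assumptions, not the designed system's parameters.

```python
import numpy as np

# Monte-Carlo caricature of Geiger-mode detection: the detector fires
# on the FIRST photoelectron in the gate, whether noise or signal.
rng = np.random.default_rng(3)
shots = 100_000
gate = 1.0e-6                  # range-gate length (s)
t_true = 0.4e-6                # true pulse flight time (s)
noise_rate = 2.0e5             # noise photoelectrons per second
p_signal = 0.5                 # per-shot signal detection probability

# First noise arrival (exponential waiting time); inf if outside gate.
wait = rng.exponential(1.0 / noise_rate, size=shots)
first_noise = np.where(wait < gate, wait, np.inf)
# Signal event time; inf on shots where the signal is missed.
sig_time = np.where(rng.random(shots) < p_signal, t_true, np.inf)

t_det = np.minimum(first_noise, sig_time)    # Geiger: first event wins
dropout = np.mean(np.isinf(t_det))           # no event in the gate
false_alarm = np.mean(np.isfinite(t_det) & (t_det != t_true))
print(f"dropout={dropout:.3f}, false alarm={false_alarm:.3f}")
```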
A 34-meter VAWT (Vertical Axis Wind Turbine) point design
NASA Astrophysics Data System (ADS)
Ashwill, T. D.; Berg, D. E.; Dodd, H. M.; Rumsey, M. A.; Sutherland, H. J.; Veers, P. S.
The Wind Energy Division at Sandia National Laboratories recently completed a point design based on the 34-m Vertical Axis Wind Turbine (VAWT) Test Bed. The 34-m Test Bed research machine incorporates several innovations that improve Darrieus technology, including increased energy production, over previous machines. The point design differs minimally from the Test Bed, but by removing research-related items its estimated cost is substantially reduced. The point design is a first step towards a Test-Bed-based commercial machine that would be competitive with conventional sources of power in the mid-1990s.
Identification of the ideal clutter metric to predict time dependence of human visual search
NASA Astrophysics Data System (ADS)
Cartier, Joan F.; Hsu, David H.
1995-05-01
The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to the points of interest within the image that are competing for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of the observer's time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse engineer the ideal clutter metric that would most accurately describe the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric to predict performance of visual search.
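The two-state random walk described above can be sketched as a simple simulation: fixation targets are drawn in proportion to a per-point clutter/attractiveness metric, and each fixation is either a quick "wander" or a slower "examine". All numbers (dwell times, attractiveness values, examine probability) are illustrative assumptions.

```python
import random

# Two-state search random walk over points of interest, weighted by a
# per-point clutter/attractiveness metric. Illustrative numbers only.
random.seed(4)
attractiveness = [5.0, 1.0, 1.0, 3.0]    # clutter metric per point
WANDER_DWELL, EXAMINE_DWELL = 0.3, 1.5   # seconds per fixation
P_EXAMINE = 0.25                         # chance a fixation is an exam

def simulate(total_fixations=10_000):
    t_wander = t_examine = 0.0
    visits = [0] * len(attractiveness)
    for _ in range(total_fixations):
        point = random.choices(range(len(attractiveness)),
                               weights=attractiveness)[0]
        visits[point] += 1
        if random.random() < P_EXAMINE:
            t_examine += EXAMINE_DWELL
        else:
            t_wander += WANDER_DWELL
    return t_wander, t_examine, visits

tw, te, visits = simulate()
print("most visited point:", visits.index(max(visits)))
print("share of time examining:", round(te / (tw + te), 2))
```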
Demonstration of a Balloon Borne Arc-second Pointer Design
NASA Astrophysics Data System (ADS)
Deweese, K.; Ward, P.
Many designs for utilizing stratospheric balloons as low-cost platforms on which to conduct space science experiments have been proposed throughout the years. A major hurdle in extending the range of experiments for which these vehicles are useful has been the imposition of the gondola dynamics on the accuracy with which an instrument can be kept pointed at a celestial target. A significant number of scientists have sought the ability to point their instruments with jitter in the arc-second range. This paper presents the design and analysis of a stratospheric balloon-borne pointing system that is able to meet this requirement. The test results of a demonstration prototype of the design with similar ability are also presented, along with discussion of a high-fidelity controller simulation for design analysis. The flexibility of the flight train is represented through generalized modal analysis. A multiple-controller scheme is utilized for coarse and fine pointing. Coarse azimuth pointing is accomplished by an established pointing system with extensive flight history residing above the gondola structure. A pitch-yaw gimbal mount is used for fine pointing, providing orthogonal axes when nominally on target. Fine-pointing actuation is from direct-drive dc motors, eliminating backlash problems. An analysis of friction nonlinearities and a demonstration of the necessity of eliminating static friction are provided. A unique bearing hub design is introduced that eliminates static friction from the system dynamics. A control scheme involving linear…
Liu, Chong; Ji, Lailin; Yang, Lin; Zhao, Dongfeng; Zhang, Yanfeng; Liu, Dong; Zhu, Baoqiang; Lin, Zunqi
2016-04-01
In order to obtain the intensity distribution of a 351 nm focal spot and the smoothing by spectral dispersion (SSD) focal plane profile of the SGII-upgraded facility, a type of off-axis imaging system with three spherical mirrors, suitable for imaging a finite-distance source point near the diffraction limit, has been designed. The quality factor of the image system is 1.6 times the diffraction limit, as tested with a 1053 nm point source. Because of the absence of a 351 nm point source, we can use a Collins diffraction imaging integral with respect to λ=351 nm, corresponding to a quality factor that is 3.8 times the diffraction limit at 351 nm. The calibration results show that at least a ±10 mrad range of view field angle and ±50 mm along the axial direction around the optimum object distance can be satisfied with a near diffraction limited image that is consistent with the design value. Using this image system, the No. 2 beam of the SGII-upgraded facility has been tested. The test result of the focal spot of the final optics assembly (FOA) at 351 nm indicates that about 80% of the energy is encompassed in 14.1 times the diffraction limit, while the output energy of the No. 2 beam is 908 J at 1053 nm. According to the convolution theorem, the true value of the 351 nm focal spot of the FOA is about 12 times the diffraction limit because of the influence of the quality factor. Further experimental studies indicate that the RMS value along the smoothing direction is less than 15.98% in the SSD spot test experiment. Computer simulations show that the quality factor of the image system used in the experiment has almost no effect on the SSD focal spot test. The image system can remarkably distort the SSD focal spot distribution when the quality factor is 15 times worse than the diffraction limit. The distorted image shows a steep slope in the contour of the SSD focal spot along the smoothing direction that otherwise has a relatively flat top region around the focal spot center.
The development of an airborne information management system for flight test
NASA Technical Reports Server (NTRS)
Bever, Glenn A.
1992-01-01
An airborne information management system is being developed at the NASA Dryden Flight Research Facility. This system will improve the state of the art in data acquisition management on board research aircraft. The design centers around highly distributable, high-speed microprocessors that allow data compression, digital filtering, and real-time analysis. This paper describes the areas of applicability, approach to developing the system, potential for trouble areas, and reasons for this development activity. System architecture (including the salient points of what makes it unique), design philosophy, and tradeoff issues are also discussed.
NASA Technical Reports Server (NTRS)
Englander, Jacob; Vavrina, Matthew
2015-01-01
The customer (scientist or project manager) most often does not want just one point solution to the mission design problem. Instead, an exploration of a multi-objective trade space is required. For a typical main-belt asteroid mission the customer might wish to see the trade-space of: Launch date vs. Flight time vs. Deliverable mass, while varying the destination asteroid, planetary flybys, launch year, etcetera. To address this question we use a multi-objective discrete outer-loop which defines many single-objective real-valued inner-loop problems.
A Conceptual Design For A Spaceborne 3D Imaging Lidar
NASA Technical Reports Server (NTRS)
Degnan, John J.; Smith, David E. (Technical Monitor)
2002-01-01
First generation spaceborne altimetric approaches are not well-suited to generating the few meter level horizontal resolution and decimeter accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few meter transverse resolutions globally using conventional approaches and offers a feasible conceptual design which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction.
Sastre, Salvador; Fernández Torija, Carlos; Carbonell, Gregoria; Rodríguez Martín, José Antonio; Beltrán, Eulalia María; González-Doncel, Miguel
2018-02-01
A diet fortified with 2,2',4,4'-tetrabromodiphenyl ether (BDE-47: 0, 10, 100, and 1000 ng/g) was dosed to 4-7-day-old post-hatch medaka fish for 40 days to evaluate the effects on the swimming activity of fish using a miniaturized swimming flume. Chlorpyrifos (CF)-exposed fish were selected as the positive control to assess the validity and sensitivity of the behavioral findings. After 20 and 40 days of exposure, the locomotor activity was analyzed for 6 min in a flume section (arena). The CF positive controls for each time point were fish exposed to 50 ng CF/ml for 48 h. Swimming patterns, presented as two-dimensional heat maps of fish movement and positioning, were obtained by geostatistical analyses. The heat maps of the control groups at time point 20 revealed visually comparable swimming patterns to those of the BDE-47-treated groups. For the comparative fish positioning analysis, both arenas were divided into 15 proportional areas. No statistical differences were found between residence times in the areas from the control groups and those from the BDE-47-treated groups. At time point 40, the heat map overall patterns of the control groups differed visually from that of the 100-ng BDE-47/g-treated group, but a comparative analysis of the residence times in the corresponding 15 areas did not reveal consistent differences. The relative distances traveled by the control and treated groups at time points 20 and 40 were also comparable. The heat maps of CF-treated fish at both time points showed contrasting swim patterns with respect to those of the controls. These differential patterns were statistically supported with differences in the residence times for different areas. The relative distances traveled by the CF-treated fish were also significantly shorter. These results confirm the validity of the experimental design and indicate that a dietary BDE-47 exposure does not affect forced swimming in medaka at growing stages.
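The residence-time heat maps described above can be sketched by binning tracked positions into a grid of arena areas, where each count is proportional to time spent there at a fixed sampling rate. The 5x3 grid mirrors the 15 proportional areas; the trajectory, arena size, and frame rate below are illustrative assumptions.

```python
import numpy as np

# Bin a synthetic 6-min trajectory (10 Hz) into 15 proportional areas
# (5x3 grid) to get per-area residence times in seconds. Arena size
# and sampling rate are assumed, not the study's values.
rng = np.random.default_rng(5)
frames, fps = 3600, 10                  # 6 min at 10 Hz
x = rng.uniform(0, 30, size=frames)     # arena 30 x 9 (arbitrary units)
y = rng.uniform(0, 9, size=frames)

counts, _, _ = np.histogram2d(x, y, bins=[5, 3],
                              range=[[0, 30], [0, 9]])
residence_s = counts / fps              # seconds spent per area
print(residence_s.shape, round(residence_s.sum(), 1))
```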
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-01-01
Objectives To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting Routinely and annually collected national data on China from 2008 to 2012. Participants 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. 
Conclusions: For examining the dose–response effect based on data from multiple time points with a hierarchical structure and stepped-wedge-like designs, multilevel RM models are more efficient, convenient and informative than multilevel DID models. PMID:28399510
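The DID contrast and the dose–response slope at the heart of this comparison can be illustrated on synthetic data. This is a deliberately simplified two-group sketch that ignores the paper's province/county/facility hierarchy; the facility counts, trend, and noise level are invented, with only the 0.012 per-year effect size borrowed from the Results.

```python
import numpy as np

# Illustrative sketch only: a flat (non-multilevel) DID estimate and a
# dose-response slope on synthetic facility data. The paper fits multilevel
# DID and RM models; this ignores the hierarchical structure entirely.

rng = np.random.default_rng(42)
n = 500                          # hypothetical facilities per group
true_effect_per_year = 0.012     # dose-response on LG-OPV (from the paper)
base = 3.0                       # assumed baseline LG-OPV level
trend = 0.05                     # assumed common secular trend per year
exposure = np.array([0, 1, 2, 3])  # years under the policy

# Treated facilities accrue the policy effect with exposure time;
# controls only follow the secular trend.
treated = np.array([
    base + trend * t + true_effect_per_year * t + rng.normal(0, 0.1, n)
    for t in exposure
])
control = np.array([
    base + trend * t + rng.normal(0, 0.1, n)
    for t in exposure
])

# Classic DID contrast (first vs last period), then scaled to per-year:
did = (treated[-1].mean() - treated[0].mean()) \
    - (control[-1].mean() - control[0].mean())

# Dose-response slope: regress the treated-minus-control gap on exposure.
gap = treated.mean(axis=1) - control.mean(axis=1)
slope = np.polyfit(exposure, gap, 1)[0]

print(round(did / 3, 4), round(slope, 4))  # both should land near 0.012
```

A multilevel RM model would additionally let the slope vary by province, county and facility, which is what allows the 'lowest' and 'highest' units to be pinpointed.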
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are several approximations in calculating the failure probability of a limit state function. The first, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this is a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational-procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated.
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
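The iterative MPP search described above can be sketched with the classic Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration, which repeatedly linearizes the limit state in standard normal space. This is a generic FORM sketch, not the approximations developed in the report; the example limit state below is an invented linear one chosen so the answer is known exactly.

```python
import math
import numpy as np

# Sketch of the HL-RF iteration for the most probable failure point (MPP)
# of a differentiable limit state g(u) = 0 in standard normal space.
# The linear limit state below is an illustrative assumption.

def hlrf(g, grad_g, n_dim, tol=1e-10, max_iter=100):
    """Return the MPP u* and the safety index beta = ||u*||."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        grad = grad_g(u)
        # Project the current point onto the linearized limit state.
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, float(np.linalg.norm(u))

# Linear limit state g(u) = 3 - (u1 + u2)/sqrt(2); failure when g < 0.
a = np.array([1.0, 1.0]) / math.sqrt(2.0)
g = lambda u: 3.0 - a @ u
grad_g = lambda u: -a

u_star, beta = hlrf(g, grad_g, 2)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # FORM estimate: Pf = Phi(-beta)
print(beta)  # 3.0 for this linear limit state (exact)
print(pf)    # ~1.35e-3
```

For a linear limit state FORM is exact; SORM and HORM refine the estimate when the failure surface is curved at the MPP.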
New technology based on clamping for high gradient radio frequency photogun
NASA Astrophysics Data System (ADS)
Alesini, David; Battisti, Antonio; Ferrario, Massimo; Foggetta, Luca; Lollo, Valerio; Ficcadenti, Luca; Pettinacci, Valerio; Custodio, Sean; Pirez, Eylene; Musumeci, Pietro; Palumbo, Luigi
2015-09-01
High gradient rf photoguns have been a key development in enabling several applications of high quality electron beams. They allow the generation of beams with very high peak current and low transverse emittance, satisfying the tight demands of free-electron lasers, energy recovery linacs, Compton/Thomson sources and high-energy linear colliders. In this paper we present the design of a new rf photogun recently developed in the framework of the SPARC_LAB photoinjector activities at the laboratories of the National Institute of Nuclear Physics in Frascati (LNF-INFN, Italy). This design implements several new features from the electromagnetic point of view and, more importantly, a novel technology for its realization that does not involve any brazing process. From the electromagnetic point of view the gun presents high mode separation, a low peak surface electric field at the iris and minimized pulsed heating on the coupler. For the realization, we have implemented a novel fabrication design that, by avoiding brazing, strongly reduces the cost, the realization time and the risk of failure. Details on the electromagnetic design, low power rf measurements, and high power radiofrequency and beam tests performed at the University of California in Los Angeles (UCLA) are discussed.
Design of a detection system for the effects of high-brightness LED arrays on human tissue
NASA Astrophysics Data System (ADS)
Chen, Shuwang; Shi, Guiju; Xue, Tongze; Liu, Yanming
2009-05-01
LEDs (Light Emitting Diodes) have many advantages in intensity, wavelength, practicality and price, making them feasible for biomedical engineering applications. A system for research on the effect of high-brightness LED arrays on human tissue is designed. The temperature of the skin surface rises when skin and underlying tissue are irradiated by high-brightness LED arrays. Metabolism and blood circulation at the irradiated position become faster than at non-irradiated positions, so the surface temperature varies across different positions of the skin. The structure of the LED source array system is presented, and a measurement system for studying the LEDs' influence on human tissue is designed. The temperature at the irradiated point is detected by an infrared temperature detector. The temperature change depends on LED parameters such as the number of LEDs, the irradiation time and the luminous intensity. The experimental device is designed as an LED array pen, which is used to irradiate points on the human body and may act on the tissue in a manner similar to acupuncture. The system is applied in treating certain skin conditions, such as age pigmentation, skin cancer and flecks.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation where the underlying computer models are extremely expensive; in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
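The core idea, replacing an expensive limit state with a cheap Gaussian-process surrogate and running Monte Carlo on the surrogate, can be sketched in a few lines. This is a minimal pure-NumPy illustration, not the paper's Bayesian design method: the 1-D limit state, RBF kernel settings, and design points are all invented assumptions.

```python
import numpy as np

# Minimal sketch: a GP surrogate (RBF kernel, posterior mean only) for a
# 1-D limit state g(x), then Monte Carlo estimation of P(g(X) < 0) on the
# cheap surrogate. Everything here is an illustrative assumption.

def rbf(xa, xb, ell=1.0):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / ell ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_query, x_train) @ alpha

# Stand-in for the expensive model: failure (g < 0) when x exceeds 1.5.
g = lambda x: 1.5 - x

x_train = np.linspace(-4.0, 4.0, 25)   # design points = expensive model runs
y_train = g(x_train)

rng = np.random.default_rng(1)
x_mc = rng.standard_normal(200_000)    # input uncertainty: X ~ N(0, 1)
pf_hat = np.mean(gp_posterior_mean(x_train, y_train, x_mc) < 0.0)

print(pf_hat)  # should approach Phi(-1.5) ~ 0.0668
```

In the paper's setting the design points are not a fixed grid: they are chosen sequentially (in batches) where the posterior is most informative about the limit state, which is what makes the approach efficient for expensive models.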
Tsakalakis, Michail; Bourbakis, Nicolaos G
2014-01-01
Continuous, real-time remote monitoring through medical point-of-care (POC) systems has drawn the interest of the scientific community for healthcare monitoring and diagnostic applications over the last decades. Advancements in several scientific fields have contributed significantly in this direction. Portable, wearable and implantable apparatus may contribute to the betterment of today's healthcare system, which suffers from fundamental hindrances. The number and heterogeneity of such devices and systems, regarding both software and hardware components (i.e., sensors, antennas, acquisition circuits) as well as the medical applications they are designed for, is impressive. The objective of the current study is to present the major technological advancements considered to be the driving forces in the design of such systems, to briefly state the new aspects they can deliver in healthcare and, finally, to identify, categorize and provide a first-level evaluation of them.
Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael
2012-01-01
Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521
Software used with the flux mapper at the solar parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C.
1984-01-01
Software for data archiving and data display was developed on a Digital Equipment Corporation (DEC) PDP-11/34A minicomputer for use with the JPL-designed flux mapper. The flux mapper is a two-dimensional, high-radiant-energy scanning device designed to measure the radiant flux energies expected at the focal point of solar parabolic dish concentrators. Interfacing to the DEC equipment was accomplished by standard RS-232C serial lines. The design of the software was dictated by design constraints of the flux-mapper controller. Early attempts at data acquisition from the flux-mapper controller were not without difficulty. Time and personnel limitations resulted in an alternative method of data recording at the test site, with subsequent analysis accomplished at a data evaluation location at some later time. Software for plotting was also written to better visualize the flux patterns. Recommendations for future alternative development are discussed. A listing of the programs used in the analysis is included in an appendix.
Non-inferiority cancer clinical trials: scope and purposes underlying their design.
Riechelmann, R P; Alex, A; Cruz, L; Bariani, G M; Hoff, P M
2013-07-01
Non-inferiority clinical trials (NIFCTs) aim to demonstrate that the experimental therapy has advantages over the standard of care, with an acceptable loss of efficacy. We evaluated the purposes underlying the selection of a non-inferiority design in oncology and the size of the non-inferiority margins (NIFm's). All NIFCTs of cancer-directed therapies and supportive care agents published in a 10-year period were eligible. Two investigators extracted the data and independently classified the trials by their purpose in choosing a non-inferiority design. Seventy-five trials were included: 43% received funds from industry, overall survival was the most common primary end point, and 73% reported positive results. The most frequent purposes underlying the selection of a non-inferiority design were to test more conveniently administered schedules and/or less toxic treatments. In 13 (17%) trials, a clear purpose was not identified. Among the trials that reported a pre-specified NIFm, the median value was 12.5% (range 4%-25%) for trials with binary primary end points and a hazard ratio of 1.25 (range 1.10-1.50) for trials that used time-to-event primary outcomes. Cancer NIFCTs harbor serious methodological and ethical issues: many use large NIFm's, and nearly one-fifth did not state a clear purpose for selecting a non-inferiority design.
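The margin-based decision rule underlying such trials can be sketched for a binary endpoint: non-inferiority is typically declared when the lower bound of the two-sided 95% confidence interval for the difference in response rates (experimental minus control) lies above -NIFm. The counts below are invented for illustration and do not come from any of the reviewed trials.

```python
import math

# Hypothetical sketch of a non-inferiority check on a binary endpoint,
# using a Wald confidence interval for the difference in proportions.
# All numbers are invented; the 12.5% margin echoes the median NIFm above.

def non_inferior(x_exp, n_exp, x_ctrl, n_ctrl, margin, z=1.959964):
    """Return (lower 95% CI bound of p_exp - p_ctrl, NI declared?)."""
    p_e, p_c = x_exp / n_exp, x_ctrl / n_ctrl
    diff = p_e - p_c
    se = math.sqrt(p_e * (1 - p_e) / n_exp + p_c * (1 - p_c) / n_ctrl)
    lower = diff - z * se
    return lower, lower > -margin

# 155/200 responders on the experimental arm vs 160/200 on control:
lower, ok = non_inferior(155, 200, 160, 200, margin=0.125)
print(round(lower, 4), ok)  # -0.1051 True: lower bound clears -0.125
```

For time-to-event endpoints the analogous rule compares the upper confidence bound of the hazard ratio against a margin such as 1.25; the larger the margin, the weaker the efficacy guarantee, which is the methodological concern the authors raise.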
Functional specifications of the annular suspension pointing system, appendix A
NASA Technical Reports Server (NTRS)
Edwards, B.
1980-01-01
The Annular Suspension Pointing System is described. The Design Realization, Evaluation and Modelling (DREAM) system and its design description technique, the DREAM Design Notation (DDN), are employed.
Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment
NASA Technical Reports Server (NTRS)
Krause, David L.; Halford, Gary R.; Bowman, Randy R.
2005-01-01
A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime, but increasing wall stresses up to 7 times over the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.
[Application of self-developed moxibustion thermometer in experiment teaching].
Zhang, Jing; Sun, Yan; Zhang, Yongchen; Lu, Yan
2017-04-12
In order to improve the teaching quality of the moxibustion experiment, a moxibustion thermometer was self-developed to monitor real-time, continuous moxibustion temperature data at different time points during the experiment. After the teacher's explanation and demonstration of the experimental process, the students used the moxibustion thermometer to monitor the change in the temperature data and extended the experimental design. During the experiment class, the students found that the temperature of the tested object rose rapidly, reached its peak and then slowly declined. In addition, with the knowledge they had learned, the students were able to design feasible experimental schemes. The self-developed moxibustion thermometer operates smoothly in actual teaching, with stable experimental data and small experimental error, and has achieved a satisfactory teaching effect.