NASA Technical Reports Server (NTRS)
Neiner, G. H.; Cole, G. L.; Arpasi, D. J.
1972-01-01
Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
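The study's control algorithms are not reproduced here, but the step it describes, converting an analog control law into a difference equation by z-transform or finite-difference methods at a chosen sample rate, can be illustrated generically. The lead-compensator form, gains, and the use of the bilinear (Tustin) transform below are illustrative assumptions, not the normal-shock control law from the report; only the two sample rates (1000 and 100 samples per second) are taken from the abstract.

```python
# Hedged sketch: discretizing a continuous-time compensator at the two sample
# rates compared in the study. The compensator K(s+a)/(s+b) is a placeholder.
import numpy as np
from scipy.signal import cont2discrete

K, a, b = 2.0, 50.0, 500.0                 # hypothetical lead compensator K(s+a)/(s+b)
num, den = [K, K * a], [1.0, b]

for fs in (1000.0, 100.0):                 # samples per second, as in the abstract
    numd, dend, dt = cont2discrete((num, den), 1.0 / fs, method="bilinear")
    b0, b1 = np.squeeze(numd)
    a1 = dend[1]
    # Resulting difference equation executed once per sample:
    print(f"fs = {fs:6.0f} Hz:  u[k] = {-a1:+.4f}*u[k-1] {b0:+.4f}*e[k] {b1:+.4f}*e[k-1]")
```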
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) control loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with, analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
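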
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
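The discrete-time representation referred to here is a Markov jump linear system, and its stochastic (mean-square) stability can be checked with a standard second-moment test: the closed loop is mean-square stable if and only if the spectral radius of an operator built from the mode dynamics and the transition probabilities is below one. The two modes and transition matrix in the sketch are invented for illustration; they are not the systems analyzed in the paper.

```python
# Hedged sketch: mean-square stability test for a discrete-time Markov jump
# linear system x[k+1] = A[mode_k] x[k]. Modes and probabilities are invented.
import numpy as np
from scipy.linalg import block_diag

A = [np.array([[0.90, 0.20], [0.00, 0.70]]),   # mode 0: controller operating normally
     np.array([[1.05, 0.20], [0.00, 1.00]])]   # mode 1: upset/disrupted computation
P = np.array([[0.95, 0.05],                    # Markov transition probabilities
              [0.60, 0.40]])

n = A[0].shape[0]
# Second-moment operator; mean-square stability <=> spectral radius < 1.
L = np.kron(P.T, np.eye(n * n)) @ block_diag(*[np.kron(Ai, Ai) for Ai in A])
rho = max(abs(np.linalg.eigvals(L)))
print(f"spectral radius = {rho:.3f} -> "
      f"{'mean-square stable' if rho < 1 else 'not mean-square stable'}")
```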
Versatile, low-cost, computer-controlled, sample positioning system for vacuum applications
NASA Technical Reports Server (NTRS)
Vargas-Aburto, Carlos; Liff, Dale R.
1991-01-01
A versatile, low-cost, easy-to-implement, microprocessor-based motorized positioning system (MPS) suitable for accurate sample manipulation in a Secondary Ion Mass Spectrometry (SIMS) system, and for other ultra-high vacuum (UHV) applications, was designed and built at NASA LeRC. The system can be operated manually or under computer control. In the latter case, local as well as remote operation is possible via the IEEE-488 bus. The position of the sample can be controlled in three orthogonal linear coordinates and one angular coordinate.
The Association Between Computer Use and Cognition Across Adulthood: Use it so You Won't Lose it?
Tun, Patricia A.; Lachman, Margie E.
2012-01-01
Understanding the association between computer use and adult cognition has been limited until now by self-selected samples with restricted ranges of age and education. Here we studied effects of computer use in a large national sample (N=2671) of adults aged 32 to 84, assessing cognition with the Brief Test of Adult Cognition by Telephone (Tun & Lachman, 2005), and executive function with the Stop and Go Switch Task (Tun & Lachman, 2008). Frequency of computer activity was associated with cognitive performance after controlling for age, sex, education, and health status: that is, individuals who used the computer frequently scored significantly higher than those who seldom used the computer. Greater computer use was also associated with better executive function on a task-switching test, even after controlling for basic cognitive ability as well as demographic variables. These findings suggest that frequent computer activity is associated with good cognitive function, particularly executive control, across adulthood into old age, especially for those with lower intellectual ability. PMID:20677884
Outdoor and indoor (subway) samples were collected by passive sampling in urban Seoul and analyzed with computer-controlled scanning electron microscopy coupled with energy dispersive x-ray spectroscopy (CCSEM-EDX). Soil/road dust particles accounted for 42-60% (by weight) of fin...
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
Extended Task Space Control for Robotic Manipulators
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Long, Mark K. (Inventor)
1996-01-01
The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities and positions of plural behaviors to be exhibited by the robot simultaneously, computing actuator accelerations of the joint actuators for the current sampling interval from both behavior forces, accelerations, velocities and positions of the current sampling interval and actuator velocities and positions of the previous sampling interval, computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval, and, finally, controlling the actuators in accordance with the actuator accelerations, velocities and positions of the current sampling interval. The actuator accelerations, velocities and positions of the current sampling interval are stored for use during the next sampling interval.
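Stripped of the multi-behavior decomposition, the per-sampling-interval bookkeeping described in the claim reduces to a recurrence: compute an actuator acceleration from the commanded behavior and the previous interval's velocities and positions, integrate it to obtain this interval's velocities and positions, command the actuators, and store the result for the next interval. The single-joint sketch below with a PD-style acceleration law is a hedged illustration of that structure only, not the patented method.

```python
# Minimal single-joint sketch of the per-sampling-interval update structure:
# acceleration from commands + previous state, integration to velocity and
# position, then storage for the next interval. The PD law is an assumption.
def control_step(q_des, q_prev, qdot_prev, dt, kp=40.0, kd=12.0):
    qddot = kp * (q_des - q_prev) - kd * qdot_prev   # actuator acceleration, this interval
    qdot = qdot_prev + qddot * dt                    # actuator velocity, this interval
    q = q_prev + qdot * dt                           # actuator position, this interval
    return qddot, qdot, q                            # stored for the next interval

q, qdot, dt = 0.0, 0.0, 0.01
for _ in range(300):
    _, qdot, q = control_step(q_des=1.0, q_prev=q, qdot_prev=qdot, dt=dt)
print(round(q, 3))   # settles near the commanded position
```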
Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation as well as between a nonlinear sampled-data system and a linearized representation holds even in a stochastic framework.
Networked event-triggered control: an introduction and research trends
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Sabih, Muhammad
2014-11-01
A physical system can be studied as either a continuous-time or a discrete-time system, depending upon the control objectives. Discrete-time control systems can be further classified into two categories based on the sampling: (1) time-triggered control systems and (2) event-triggered control systems. Time-triggered systems sample states and compute controls at every sampling instant in a periodic fashion, even when the states and the computed control change very little; the result is unnecessary data transmission and computation. For networked systems, these transmissions of measurement and control signals also cause unnecessary network traffic. Event-triggered systems, on the other hand, have the potential to reduce the communication burden in addition to reducing the computation of control signals. This paper provides an up-to-date survey of event-triggered methods for control systems and highlights potential research directions.
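The contrast the survey draws can be made concrete with a toy loop: a time-triggered scheme would transmit the measurement and recompute the control at every one of the N samples, whereas an event-triggered scheme does so only when a triggering condition is violated (here, the state drifting from the last transmitted value by more than a threshold). The scalar plant, gain, and threshold are illustrative assumptions, not taken from the survey.

```python
# Toy event-triggered loop: transmit and recompute only when the triggering
# condition fires. Plant, gain, threshold, and noise level are illustrative.
import numpy as np

a, b, K, thresh, N = 0.98, 0.1, 4.0, 0.05, 500
rng = np.random.default_rng(0)
x, x_sent, u, transmissions = 1.0, np.inf, 0.0, 0

for k in range(N):
    if abs(x - x_sent) > thresh:        # event-triggering condition
        x_sent = x                      # measurement transmitted over the network
        u = -K * x_sent                 # control recomputed only on events
        transmissions += 1
    x = a * x + b * u + rng.normal(0.0, 0.002)   # plant propagates every sample

print(f"{transmissions} transmissions instead of {N} "
      f"({100 * transmissions / N:.0f}% of the time-triggered traffic)")
```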
Soft Real-Time PID Control on a VME Computer
NASA Technical Reports Server (NTRS)
Karayan, Vahag; Sander, Stanley; Cageao, Richard
2007-01-01
microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating in a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command by use of an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
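The inner loop sketched below mirrors the structure described in part (3) of the abstract: once per sampling period, read the current position, look up the ideal position in the profile, run a PID update, and write a voltage command to the D/A. The driver functions, gains, and the toy stage model in the demo are placeholders; the real VME/LPB drivers and tuning are not given in the abstract.

```python
# Hedged sketch of a microPID-style inner loop; I/O functions and gains are
# placeholders, not the actual VME device drivers or controller tuning.
def pid_step(error, state, kp, ki, kd, dt):
    state["i"] += error * dt
    d = (error - state["e"]) / dt
    state["e"] = error
    return kp * error + ki * state["i"] + kd * d

def control_loop(read_position, write_voltage, profile, fs=8000.0,
                 kp=5.0, ki=50.0, kd=0.0005, n_samples=8000):
    dt, state = 1.0 / fs, {"i": 0.0, "e": 0.0}
    for k in range(n_samples):          # one pass per 125-microsecond period
        ideal = profile(k * dt)         # ideal position from the stored profile
        actual = read_position()        # current position from the LPB driver
        command = pid_step(ideal - actual, state, kp, ki, kd, dt)
        write_voltage(command)          # command sent to the D/A board

# Tiny demo with a fictitious stage that integrates the commanded voltage.
pos = {"x": 0.0}
control_loop(read_position=lambda: pos["x"],
             write_voltage=lambda v: pos.__setitem__("x", pos["x"] + 0.0005 * v),
             profile=lambda t: 0.5 * t,          # hypothetical linear scan profile
             n_samples=2000)
print(round(pos["x"], 4))   # should approach 0.5 * 0.25 = 0.125 (ramp following)
```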
Screen time and physical violence in 10 to 16-year-old Canadian youth.
Janssen, Ian; Boyce, William F; Pickett, William
2012-04-01
To examine the independent associations between television, computer, and video game use with physical violence in youth. The study population consisted of a representative cross-sectional sample of 9,672 Canadian youth in grades 6-10 and a 1-year longitudinal sample of 1,861 youth in grades 9-10. The number of weekly hours watching television, playing video games, and using a computer was determined. Violence was defined as engagement in ≥2 physical fights in the previous year and/or perpetration of ≥2-3 monthly episodes of physical bullying. Logistic regression was used to examine associations. In the cross-sectional sample, computer use was associated with violence independent of television and video game use. Video game use was associated with violence in girls but not boys. Television use was not associated with violence after controlling for the other screen time measures. In the longitudinal sample, video game use was a significant predictor of violence after controlling for the other screen time measures. Computer and video game use were the screen time measures most strongly related to violence in this large sample of youth.
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
Flow through electrode with automated calibration
Szecsody, James E [Richland, WA; Williams, Mark D [Richland, WA; Vermeul, Vince R [Richland, WA
2002-08-20
The present invention is an improved automated flow through electrode liquid monitoring system. The automated system has a sample inlet to a sample pump, a sample outlet from the sample pump to at least one flow through electrode with a waste port. At least one computer controls the sample pump and records data from the at least one flow through electrode for a liquid sample. The improvement relies upon (a) at least one source of a calibration sample connected to (b) an injection valve connected to said sample outlet and connected to said source, said injection valve further connected to said at least one flow through electrode, wherein said injection valve is controlled by said computer to select between said liquid sample or said calibration sample. Advantages include improved accuracy because of more frequent calibrations, no additional labor for calibration, no need to remove the flow through electrode(s), and minimal interruption of sampling.
ERIC Educational Resources Information Center
Mitchell, Eugene E., Ed.
The simulation of a sampled-data system is described that uses a full parallel hybrid computer. The sampled data system simulated illustrates the proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred-tank. The stirred-tank is simulated using continuous analog components, while PID…
10 CFR Appendix B to Subpart F of... - Sampling Plan For Enforcement Testing
Code of Federal Regulations, 2011 CFR
2011-01-01
... performance of the n1 units in the first sample as follows: [Equation ER18MR98.012] Step 5. Compute the upper control limit (UCL1) and lower control limit (LCL1) for the mean of the first sample using the applicable DOE ... the mean of the first sample (x̄1) with the upper and lower control limits (UCL1 and LCL1) to ...
Concerns about the environmental and public health effects of particulate matter (PM) have stimulated interest in analytical techniques capable of measuring the size and chemical composition of individual aerosol particles. Computer-controlled scanning electron microscopy (CCSE...
Submillisecond Optical Knife-Edge Testing
NASA Technical Reports Server (NTRS)
Thurlow, P.
1983-01-01
Fast computer-controlled sampling of the optical knife-edge response (KER) signal increases the accuracy of optical-system aberration measurements. Submicrosecond-response detectors in the optical focal plane convert optical signals to electrical signals, which are converted to digital data, sampled, and fed into a computer for storage and subsequent analysis. The optical data are virtually free of the effects of index-of-refraction gradients.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
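The "state-space equations" such hardware evaluates are just the discrete update x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k], computed between each A/D read and D/A write. The matrices and the stand-in sensor function below are arbitrary examples, not a controller from the paper.

```python
# Per-sample state-space controller update of the kind computed between the
# A/D read and the D/A write. Matrices and the sensor stub are arbitrary.
import numpy as np

A = np.array([[0.95, 0.10], [0.00, 0.90]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def read_sensor_sample(k):                      # stand-in for the A/D module read
    return np.sin(2 * np.pi * 50 * k / 625e3)   # 50 Hz signal at a 625 kHz rate

x = np.zeros((2, 1))
outputs = []
for k in range(1000):                           # one iteration per sampling period
    u = np.array([[read_sensor_sample(k)]])
    y = C @ x + D @ u                           # value written to the D/A module
    x = A @ x + B @ u                           # state carried to the next sample
    outputs.append(y.item())
print(f"{len(outputs)} output samples computed, last value {outputs[-1]:.6f}")
```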
SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments
NASA Technical Reports Server (NTRS)
Leonard, R. F.
1977-01-01
A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations by using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires the locating of peaks in X-ray spectra, determination of intensities of peaks, identification of origins of peaks, and determination of the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
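The package's exact formulas are not reproduced in the summary; as an illustration of what such a calculation involves for a binary exposure, the sketch below computes the classic two-group sample size for detecting a given odds ratio at a specified control exposure prevalence, power, and significance level. It is a textbook approximation, not the package's multivariate logistic method, and the numbers are illustrative.

```python
# Hedged sketch: classic two-proportion sample-size approximation for an
# unmatched case-control study with a binary exposure. This is a textbook
# formula for illustration, not the package's multivariate logistic method.
from math import ceil, sqrt
from scipy.stats import norm

def cases_needed(p0, odds_ratio, alpha=0.05, power=0.80, controls_per_case=1.0):
    """Approximate number of cases needed to detect `odds_ratio`, given the
    exposure prevalence `p0` among controls."""
    r = controls_per_case
    p1 = odds_ratio * p0 / (1.0 + p0 * (odds_ratio - 1.0))   # exposure prevalence in cases
    z_a, z_b = norm.ppf(1.0 - alpha / 2.0), norm.ppf(power)
    p_bar = (p1 + r * p0) / (1.0 + r)
    n = ((z_a * sqrt((1.0 + 1.0 / r) * p_bar * (1.0 - p_bar))
          + z_b * sqrt(p1 * (1.0 - p1) + p0 * (1.0 - p0) / r)) ** 2
         / (p1 - p0) ** 2)
    return ceil(n)

print(cases_needed(p0=0.20, odds_ratio=2.0))   # controls = controls_per_case * cases
```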
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
Data processing for water monitoring system
NASA Technical Reports Server (NTRS)
Monford, L.; Linton, A. T.
1978-01-01
The water monitoring data acquisition system is structured around a central computer that controls sampling and sensor operation, and analyzes and displays data in real time. The unit is essentially separated into two systems: a computer system and a hard-wired backup system, which may function separately or with the computer.
Effects of computing time delay on real-time control systems
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Cui, Xianzhong
1988-01-01
The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
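A toy simulation makes the "delay problem" concrete: the control applied at sample k is computed from a measurement that is d samples old, and tracking degrades (and eventually the loop destabilizes) as d grows. The double-integrator plant and PD gains below are illustrative assumptions; the paper's robot-trajectory example and its STP/PID controllers are not reproduced.

```python
# Toy illustration of the delay problem: control at sample k uses a
# measurement d samples old. Plant and gains are illustrative only.
def run(delay_samples, dt=0.01, steps=600, kp=60.0, kd=14.0):
    pos, vel, err = 0.0, 0.0, 0.0
    buffer = [(0.0, 0.0)] * (delay_samples + 1)    # measurements awaiting the CPU
    for _ in range(steps):
        p_meas, v_meas = buffer[0]                 # stale measurement used for control
        u = kp * (1.0 - p_meas) - kd * v_meas      # track a unit step reference
        vel += u * dt                              # double-integrator plant
        pos += vel * dt
        buffer = buffer[1:] + [(pos, vel)]
        err += abs(1.0 - pos) * dt
    return err

for d in (0, 2, 5, 10):
    print(f"computing delay = {d:2d} samples -> integrated tracking error {run(d):.3f}")
```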
Feedback quantum control of molecular electronic population transfer
NASA Astrophysics Data System (ADS)
Bardeen, Christopher J.; Yakovlev, Vladislav V.; Wilson, Kent R.; Carpenter, Scott D.; Weber, Peter M.; Warren, Warren S.
1997-11-01
Feedback quantum control, where the sample 'teaches' a computer-controlled arbitrary lightform generator to find the optimal light field, is experimentally demonstrated for a molecular system. Femtosecond pulses tailored by a computer-controlled acousto-optic pulse shaper excite fluorescence from laser dye molecules in solution. Fluorescence and laser power are monitored, and the computer uses the experimental data and a genetic algorithm to optimize population transfer from ground to first excited state. Both efficiency (the ratio of excited state population to laser energy) and effectiveness (total excited state population) are optimized. Potential use as an 'automated theory tester' is discussed.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Does exposure to computers affect the routine parameters of semen quality?
Sun, Yue-Lian; Zhou, Wei-Jin; Wu, Jun-Qing; Gao, Er-Sheng
2005-09-01
To assess whether exposure to computers harms the semen quality of healthy young men. A total of 178 subjects were recruited from two maternity and children healthcare centers in Shanghai: 91 with a history of exposure to computers (i.e., exposure for 20 h or more per week in the last 2 years) and 87 serving as controls (no or little exposure to computers). Data on the history of exposure to computers and other characteristics were obtained by means of a structured questionnaire interview. Semen samples were collected by masturbation at the location where they were analyzed. No differences in the distribution of the semen parameters (semen volume, sperm density, percentage of progressive sperm, sperm viability and percentage of normal form sperm) were found between the exposed group and the control group. Exposure to computers was not found to be a risk factor for inferior semen quality after adjusting for potential confounders, including abstinence days, testicle size, occupation, and history of exposure to toxic substances. The present study did not find that healthy men exposed to computers had inferior semen quality.
Computer controlled fluorometer device and method of operating same
Kolber, Z.; Falkowski, P.
1990-07-17
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.
Computer controlled fluorometer device and method of operating same
Kolber, Zbigniew; Falkowski, Paul
1990-01-01
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.
A multitasking finite state architecture for computer control of an electric powertrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burba, J.C.
1984-01-01
Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
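The architecture described, a finite state machine representation of the control strategy deciding which tasks the time-sliced scheduler runs, can be sketched very compactly. The states, events, and task lists below are invented for illustration and are not the actual powertrain strategy.

```python
# Hedged sketch of a finite-state control strategy gating a time-sliced
# scheduler. States, events, and task names are invented for illustration.
TRANSITIONS = {
    ("IDLE",  "key_on"):      "READY",
    ("READY", "accel_pedal"): "DRIVE",
    ("DRIVE", "brake_pedal"): "READY",
    ("DRIVE", "fault"):       "SHUTDOWN",
}
TASKS = {   # tasks enabled in each state, run round-robin within each time slice
    "IDLE":     ["monitor_battery"],
    "READY":    ["monitor_battery", "sample_pedals"],
    "DRIVE":    ["monitor_battery", "sample_pedals", "update_motor_current"],
    "SHUTDOWN": ["open_contactors"],
}

def step(state, event):
    """Advance the strategy FSM and return the task list for the next slice."""
    state = TRANSITIONS.get((state, event), state)   # unknown events leave the state unchanged
    return state, TASKS[state]

state = "IDLE"
for event in ["key_on", "accel_pedal", None, "fault"]:
    state, tasks = step(state, event)
    print(f"event={event!s:12s} state={state:8s} tasks={tasks}")
```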
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
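The core idea, estimating a very small p-value by drawing resamples from a tilted proposal distribution and reweighting rather than drawing astronomically many plain resamples, can be shown on a toy Gaussian tail probability. The exponential tilt below is a generic textbook illustration, not the spike-train-specific jitter and permutation samplers of the paper.

```python
# Toy importance-sampling illustration: estimate p = P(Z > 5) for Z ~ N(0,1).
# Plain Monte Carlo almost never sees the event; a proposal shifted to the
# threshold sees it about half the time and is reweighted by the density ratio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t, n = 5.0, 100_000

plain = (rng.standard_normal(n) > t).mean()

z = rng.normal(loc=t, scale=1.0, size=n)        # tilted proposal N(t, 1)
weights = np.exp(-t * z + 0.5 * t * t)          # N(0,1) pdf / N(t,1) pdf
tilted = float(np.mean(weights * (z > t)))

print(f"exact      {norm.sf(t):.3e}")
print(f"plain MC   {plain:.3e}")
print(f"importance {tilted:.3e}")
```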
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Mathias, Patrick C; Turner, Emily H; Scroggins, Sheena M; Salipante, Stephen J; Hoffman, Noah G; Pritchard, Colin C; Shirts, Brian H
2016-03-01
To apply techniques for ancestry and sex computation from next-generation sequencing (NGS) data as an approach to confirm sample identity and detect sample processing errors. We combined a principal component analysis method with k-nearest neighbors classification to compute the ancestry of patients undergoing NGS testing. By combining this calculation with X chromosome copy number data, we determined the sex and ancestry of patients for comparison with self-report. We also modeled the sensitivity of this technique in detecting sample processing errors. We applied this technique to 859 patient samples with reliable self-report data. Our k-nearest neighbors ancestry screen had an accuracy of 98.7% for patients reporting a single ancestry. Visual inspection of principal component plots was consistent with self-report in 99.6% of single-ancestry and mixed-ancestry patients. Our model demonstrates that approximately two-thirds of potential sample swaps could be detected in our patient population using this technique. Patient ancestry can be estimated from NGS data incidentally sequenced in targeted panels, enabling an inexpensive quality control method when coupled with patient self-report.
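The quality-control idea, projecting each sample onto principal components learned from reference genotypes, classifying ancestry with k-nearest neighbors, and flagging disagreements with self-report as possible sample swaps, can be sketched with scikit-learn. The synthetic genotype matrix, group labels, and allele frequencies below are placeholders standing in for the NGS panel data.

```python
# Hedged sketch of a PCA + k-nearest-neighbors ancestry check; the synthetic
# genotype dosages and group names are placeholders, not real panel data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_per_group, n_variants = 200, 500
freqs = {"groupA": rng.uniform(0.1, 0.9, n_variants),
         "groupB": rng.uniform(0.1, 0.9, n_variants)}
X = np.vstack([rng.binomial(2, freqs[g], size=(n_per_group, n_variants))
               for g in freqs])                       # reference genotype matrix
y = np.repeat(list(freqs), n_per_group)               # known ancestry labels

pcs = PCA(n_components=4).fit(X)
clf = KNeighborsClassifier(n_neighbors=15).fit(pcs.transform(X), y)

new_sample = rng.binomial(2, freqs["groupB"], size=(1, n_variants))
predicted = clf.predict(pcs.transform(new_sample))[0]
self_report = "groupA"                                # e.g., from the requisition form
if predicted != self_report:
    print(f"possible sample swap: predicted {predicted}, reported {self_report}")
```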
Laganá, Luciana; García, James J.
2013-01-01
Introduction: We preliminarily explored the effects of computer and internet training in older age and attempted to address the diversity gap in the ethnogeriatric literature, given that, in our study’s sample, only one-third of the participants self-identified as White. The aim of this investigation was to compare two groups - the control and the experimental conditions - regarding theme 1) computer attitudes and related self-efficacy, and theme 2) self-esteem and depressive symptomatology. Methods: Sixty non-institutionalized residents of Los Angeles County (mean age ± SD: 69.12 ± 10.37 years; age range: 51-92) were randomly assigned to either the experimental group (n=30) or the waitlist/control group (n=30). The experimental group was involved in 6 weeks of one-on-one computer and internet training for one 2-hour session per week. The same training was administered to the control participants after their post-test. Outcome measures included the four variables, organized into the two aforementioned themes. Results: There were no significant between-group differences in either post-test computer attitudes or self-esteem. However, findings revealed that the experimental group reported greater computer self-efficacy, compared to the waitlist/control group, at post-test/follow-up [F(1,56)=28.89, p=0.001, η2=0.01]. Additionally, at the end of the computer and internet training, there was a substantial and statistically significant decrease in depression scores among those in the experimental group when compared to the waitlist/control group [F(1,55)=9.06, p<0.004, η2=0.02]. Conclusions: There were significant improvements in favour of the experimental group in computer self-efficacy and, of noteworthy clinical relevance, in depression, as evidenced by a decreased percentage of significantly depressed experimental subjects from 36.7% at baseline to 16.7% at the end of our intervention. PMID:24151452
The use of computers in a materials science laboratory
NASA Technical Reports Server (NTRS)
Neville, J. P.
1990-01-01
The objective is to make available a method of easily recording the microstructure of a sample by means of a computer. The method requires a minimum investment and little or no instruction on the operation of a computer. An outline of the setup involving a black and white TV camera, a digitizer control box, a metallurgical microscope and a computer screen, printer, and keyboard is shown.
Computer graphics for quality control in the INAA of geological samples
Grossman, J.N.; Baedecker, P.A.
1987-01-01
A data reduction system for the routine instrumental activation analysis of samples is described, with particular emphasis on interactive graphics capabilities for evaluating analytical quality. Graphics procedures have been developed to interactively control the analysis of selected photopeaks during spectral analysis, and to evaluate detector performance during a given counting cycle. Graphics algorithms are also used to compare the data on reference samples with accepted values, to prepare quality control charts to evaluate long term precision and to search for systematic variations in data on reference samples as a function of time. © 1987 Akadémiai Kiadó.
Eye-gaze determination of user intent at the computer interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, J.H.; Schryver, J.C.
1993-12-31
Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location looking at controlled stimuli, at 30 Hz, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments, based upon these variables. Future work will enhance the accuracy and precision of the modeling technique, and will empirically test users in controlled experiments.
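The clustering step described above, connecting one data frame's gaze samples into a minimum spanning tree and then cutting it into clusters according to user-defined parameters, can be sketched with SciPy's graph routines. The synthetic fixations and the edge-length threshold are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of the described clustering step: build a minimum spanning
# tree over one data frame's gaze samples, cut edges longer than a threshold,
# and report the clusters. Data and threshold are illustrative only.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
gaze = np.vstack([rng.normal([100, 100], 5, size=(20, 2)),   # fixation near (100, 100)
                  rng.normal([400, 250], 5, size=(15, 2))])  # fixation near (400, 250)

dist = squareform(pdist(gaze))                 # pairwise distances in pixels
mst = minimum_spanning_tree(dist).toarray()
mst[mst > 40.0] = 0.0                          # user-defined cut-off (40 px, assumed)
n_clusters, labels = connected_components(mst + mst.T, directed=False)

for c in range(n_clusters):
    pts = gaze[labels == c]
    print(f"cluster {c}: {len(pts)} samples, centroid {pts.mean(axis=0).round(1)}")
```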
NASA Astrophysics Data System (ADS)
Stadler, Philipp; Farnleitner, Andreas H.; Zessner, Matthias
2016-04-01
This presentation describes in depth how a low-cost micro-computer was used to substantially improve established measuring systems through the construction and implementation of a purpose-built complementary device for on-site sample pretreatment. A fully automated on-site device was developed and field-tested that enables water sampling with simultaneous filtration as well as an effective cleaning procedure for the device's components. The described auto-sampler is controlled by a low-cost single-board computer and is designed for sample pretreatment, with minimal sample alteration, to meet the requirements of on-site measurement devices that cannot handle coarse suspended solids within the measurement procedure or cycle. The automated sample pretreatment was tested for over one year for rapid, on-site enzymatic activity (beta-D-glucuronidase, GLUC) determination in sediment-laden stream water. The formerly used proprietary sampling set-up was assumed to lead to a significant damping of the measurement signal due to its susceptibility to clogging, debris, and biofilm accumulation. Results show that the installation of the developed apparatus considerably enhanced the error-free running time of the connected measurement devices and increased the measurement accuracy to a previously unmatched quality.
Design of an online EEG based neurofeedback game for enhancing attention and memory.
Thomas, Kavitha P; Vinod, A P; Guan, Cuntai
2013-01-01
Brain-Computer Interface (BCI) is an alternative communication and control channel between brain and computer which finds applications in neuroprosthetics, brain wave controlled computer games etc. This paper proposes an Electroencephalogram (EEG) based neurofeedback computer game that allows the player to control the game with the help of attention based brain signals. The proposed game protocol requires the player to memorize a set of numbers in a matrix, and to correctly fill the matrix using his attention. The attention level of the player is quantified using sample entropy features of EEG. The statistically significant performance improvement of five healthy subjects after playing a number of game sessions demonstrates the effectiveness of the proposed game in enhancing their concentration and memory skills.
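Sample entropy, the attention feature named above, has a standard definition: the negative logarithm of the conditional probability that subsequences similar for m points remain similar for m+1 points within a tolerance r. The short implementation below follows that textbook definition on synthetic signals; it is not the game's actual EEG pipeline, and m and r are assumed values.

```python
# Textbook sample entropy, SampEn = -ln(A/B): B counts template matches of
# length m, A of length m+1, within tolerance r (Chebyshev distance),
# excluding self-matches. m and r_factor are assumed, not from the paper.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1          # exclude the self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))   # regular -> low
print(sample_entropy(rng.standard_normal(1000)))                  # irregular -> high
```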
Holmes, Thomas D; Guilmette, Raymond A; Cheng, Yung Sung; Parkhurst, Mary Ann; Hoover, Mark D
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study was undertaken to obtain aerosol samples resulting from a large-caliber DU penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post perforation, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the crew locations in the test vehicles. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for DU concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, Thomas D.; Guilmette, Raymond A.; Cheng, Yung-Sung
2009-03-01
The Capstone Depleted Uranium Aerosol Study was undertaken to obtain aerosol samples resulting from a kinetic-energy cartridge with a large-caliber depleted uranium (DU) penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post-impact, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the vehicle commander, loader, gunner, and driver. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for depleted uranium concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
An investigation of potential applications of OP-SAPS: Operational sampled analog processors
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.
1976-01-01
The impact of charge-coupled device (CCD) processors on future instrumentation was investigated. The CCD devices studied process sampled analog data and are referred to as OP-SAPS - operational sampled analog processors. Preliminary studies into various architectural configurations for systems composed of OP-SAPS show that they have potential in such diverse applications as pattern recognition and automatic control. It appears probable that OP-SAPS may be used to construct computing structures which can serve as special peripherals to large-scale computer complexes used in real time flight simulation. The research was limited to the following benchmark programs: (1) face recognition, (2) voice command and control, (3) terrain classification, and (4) terrain identification. A small amount of effort was spent on examining a method by which OP-SAPS may be used to decrease the limiting ground sampling distance encountered in remote sensing from satellites.
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high-fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on the quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization, to the case of superconducting circuits, of the ideas behind the boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss a theoretical analysis of the driven evolution of the device resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the quantum Monte Carlo method. We also discuss the detailed theoretical modeling required to achieve high-fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
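The cross-entropy figure of merit referred to here compares measured bitstrings against the ideal output probabilities of the sampled circuit. One commonly quoted linear form (shown as an assumption about the exact normalization used in this work) is:

```latex
% Linear cross-entropy-style fidelity from N measured bitstrings x_1,...,x_N,
% with p_U(x) the ideal output probability of the n-qubit circuit U.
% The normalization below is one common convention, not necessarily the exact
% metric used in the cited presentation.
\[
  \mathcal{F}_{\mathrm{XEB}} \;=\; \frac{2^{n}}{N}\sum_{i=1}^{N} p_U(x_i) \;-\; 1 ,
\]
% which tends to 1 for samples drawn from the ideal chaotic (Porter-Thomas)
% distribution and to 0 for uniformly random bitstrings.
```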
Formation Flying Control Implementation in Highly Elliptical Orbits
NASA Technical Reports Server (NTRS)
Capo-Lugo, Pedro A.; Bainum, Peter M.
2009-01-01
The Tschauner-Hempel equations are widely used to correct the separation distance drifts between a pair of satellites within a constellation in highly elliptical orbits [1]. This set of equations was discretized in the true anomaly angle [1] to be used in a digital steady-state hierarchical controller [2]. This controller [2] performed the drift correction between a pair of satellites within the constellation. The objective of a discretized system is to develop a simple algorithm to be implemented in the computer onboard the satellite. The main advantage of the discrete systems is that the computational time can be reduced by selecting a suitable sampling interval. For this digital system, the amount of data will depend on the sampling interval in the true anomaly angle [3]. The purpose of this paper is to implement the discrete Tschauner-Hempel equations and the steady-state hierarchical controller in the computer onboard the satellite. This set of equations is expressed in the true anomaly angle in which a relation will be formulated between the time and the true anomaly angle domains.
Digital flight control systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Vanlandingham, H. F.
1977-01-01
The design of stable feedback control laws for sampled-data systems with variable rate sampling was investigated. These types of sampled-data systems arise naturally in digital flight control systems which use digital actuators where it is desirable to decrease the number of control computer output commands in order to save wear and tear of the associated equipment. The design of aircraft control systems which are optimally tolerant of sensor and actuator failures was also studied. The problem of detecting which sensor or actuator has failed must be resolved, and if the estimate of the state is used in the control law, then it is also desirable to have an estimator which will give the optimal state estimate even under the failed conditions.
Alpha Control - A new Concept in SPM Control
NASA Astrophysics Data System (ADS)
Spizig, P.; Sanchen, D.; Volswinkler, G.; Ibach, W.; Koenen, J.
2006-03-01
Controlling modern Scanning Probe Microscopes demands highly sophisticated electronics. While flexibility and powerful computing are of great importance in facilitating the variety of measurement modes, extremely low noise is also a necessity. Accordingly, modern SPM controller designs are based on digital electronics to overcome the drawbacks of analog designs. While today's SPM controllers are based on DSPs or microprocessors and often still incorporate analog parts, we are now introducing a completely new approach: using a Field Programmable Gate Array (FPGA) to implement the digital control tasks allows unrivalled data processing speed by computing all tasks in parallel within a single chip. Time-consuming task switching between data acquisition, digital filtering, scanning and the computing of feedback signals can be completely avoided. Together with a star topology to avoid any bus limitations in accessing the variety of ADCs and DACs, this design guarantees for the first time an entirely deterministic timing capability in the nanosecond regime for all tasks. This becomes especially useful for any external experiments which must be synchronized with the scan or for high-speed scans that require not only closed-loop control of the scanner, but also dynamic correction of the scan movement. Delicate samples additionally benefit from extremely high sample rates, allowing highly resolved signals and low noise levels.
Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...
The control of a manipulator by a computer model of the cerebellum.
NASA Technical Reports Server (NTRS)
Albus, J. S.
1973-01-01
Extension of previous work by Albus (1971, 1972) on the theory of cerebellar function to an application of a computer model of the cerebellum to manipulator control. Following a discussion of the cerebellar function and of a perceptron analogy of the cerebellum, particularly in regard to learning, an electromechanical model of the cerebellum is considered in the form of an IBM 1800 computer connected to a Rancho Los Amigos arm with seven degrees of freedom. It is shown that the computer memory makes it possible to train the arm on some representative sample of the universe of possible states and to achieve satisfactory performance.
Digital Plasma Control System for Alcator C-Mod
NASA Astrophysics Data System (ADS)
Ferrara, M.; Wolfe, S.; Stillerman, J.; Fredian, T.; Hutchinson, I.
2004-11-01
A digital plasma control system (DPCS) has been designed to replace the present C-Mod system, which is based on a hybrid analog-digital computer. The initial implementation of DPCS comprises two 64-channel, 16-bit, low-latency cPCI digitizers, each with 16 analog outputs, controlled by a rack-mounted single-processor Linux server, which also serves as the compute engine. A prototype system employing three older 32-channel digitizers was tested during the 2003-04 campaign. The hybrid's linear PID feedback system was emulated by IDL code executing a synchronous loop, using the same target waveforms and control parameters. Reliable real-time operation was accomplished under a standard Linux OS (RH9) by locking memory and disabling interrupts during the plasma pulse. The DPCS-computed outputs agreed to within a few percent with those produced by the hybrid system, except for discrepancies due to offsets and non-ideal behavior of the hybrid circuitry. The system operated reliably, with no sample loss, at more than twice the 10 kHz design specification, providing extra time for implementing more advanced control algorithms. The code is fault-tolerant and produces consistent output waveforms even with 10% sample loss.
Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.
Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N
2016-06-15
Metadynamics (MTD) is a very powerful technique to sample high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins by biasing potentials and thus for cases with flat, broad, and unbound free energy wells, the computational time to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) in a simultaneous way. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for a controlled sampling of a CV in a MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for a distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency in sampling. We demonstrate the application of this technique in sampling high-dimensional surface for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories.
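The reweighting step outlined above recovers the unbiased distribution of the umbrella-restrained CV from the set of independently biased simulations. In its plain umbrella-sampling form this is the standard WHAM estimate shown below; the additional Tiwary-Parrinello reweighting of the time-dependent MTD bias is layered on top and is omitted here for brevity.

```latex
% Standard WHAM combination of K umbrella windows (time-dependent MTD bias
% reweighting omitted): n_k samples in window k with bias w_k(\xi),
% \tilde{P}_k(\xi) the biased distribution from window k, \beta = 1/(k_B T),
% and shifts f_k determined self-consistently.
\[
  P(\xi) \;=\;
  \frac{\displaystyle\sum_{k=1}^{K} n_k\, \tilde{P}_k(\xi)}
       {\displaystyle\sum_{k=1}^{K} n_k\, e^{\,\beta\,[\,f_k - w_k(\xi)\,]}},
  \qquad
  e^{-\beta f_k} \;=\; \int d\xi\; e^{-\beta\, w_k(\xi)}\, P(\xi).
\]
```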
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
NASA Technical Reports Server (NTRS)
Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.; Gilbert, Percy
1989-01-01
A computer-controlled thermal-wave microscope was developed to investigate III-V compound semiconductor devices and materials. It is a nondestructive technique that provides information on subsurface thermal features of solid samples. Furthermore, because this is a subsurface technique, three-dimensional imaging is also possible. The microscope uses the intensity-modulated electron beam of a modified scanning electron microscope to generate thermal waves in the sample. Acoustic waves generated by the thermal waves are received by a transducer and processed in a computer to form images that are displayed on the video display of the microscope or recorded on magnetic disk.
Active flutter suppression using optimal output feedback digital controllers
NASA Technical Reports Server (NTRS)
1982-01-01
A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A convergent algorithm is employed to determine constrained control law parameters that minimize an infinite time discrete quadratic performance index. Low order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Sample rate variation, prefilter pole variation, control structure variation and gain scheduling are discussed. A digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.
Identification and Quantitative Measurements of Chemical Species by Mass Spectrometry
NASA Technical Reports Server (NTRS)
Zondlo, Mark A.; Bomse, David S.
2005-01-01
The development of a miniature gas chromatograph/mass spectrometer system for the measurement of chemical species of interest to combustion is described. The completed system is a fully-contained, automated instrument consisting of a sampling inlet, a small-scale gas chromatograph, a miniature quadrupole mass spectrometer, vacuum pumps, and software. A pair of computer-driven valves controls the gas sampling and introduction to the chromatographic column. The column has a stainless steel exterior and a silica interior, and contains an adsorbent that is used to separate organic species. The detection system is based on a quadrupole mass spectrometer consisting of a micropole array, electrometer, and a computer interface. The vacuum system has two miniature pumps to maintain the low pressure needed for the mass spectrometer. A laptop computer uses custom software to control the entire system and collect the data. In a laboratory demonstration, the system separated calibration mixtures containing 1000 ppm of alkanes and alkenes.
Proposal for Microwave Boson Sampling.
Peropadre, Borja; Guerreschi, Gian Giacomo; Huh, Joonsuk; Aspuru-Guzik, Alán
2016-09-30
Boson sampling, the task of sampling the probability distribution of photons at the output of a photonic network, is believed to be hard for any classical device. Unlike other models of quantum computation that require thousands of qubits to outperform classical computers, boson sampling requires only a handful of single photons. However, a scalable implementation of boson sampling is missing. Here, we show how superconducting circuits provide such a platform. Our proposal differs radically from traditional quantum-optical implementations: rather than injecting photons in waveguides, making them pass through optical elements like phase shifters and beam splitters, and finally detecting their output mode, we prepare the required multiphoton input state in a superconducting resonator array, control its dynamics via tunable and dispersive interactions, and measure it with nondemolition techniques.
Using a computer-controlled system, this ultrafiltration device automates the process of concentrating a water sample and can be operated in the field. The system was also designed to reduce human exposure to potentially contaminated water.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of the computation of the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, and a sampling-adaptive control uses it to switch back to integration whenever needed to maintain the accuracy of the GCMSAI computations. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full to partial and partial to partial, as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires a computational time of one-thirtieth to one-fiftieth that of the previous GCM.
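The abstract does not reproduce the interpolation formulas, so the sketch below only shows the baseline GCM step they accelerate: estimating the one-step transition matrix of a two-dimensional map by sampling points in each cell and mapping them forward. In GCMSAI, the calls to `one_step_map` would be replaced by third-order sampling-adaptive interpolation with an error-controlled fall-back to integration; names and parameters here are illustrative.

```python
import numpy as np

def gcm_transition_matrix(one_step_map, bounds, n_cells, samples_per_cell=25, rng=None):
    """Estimate the one-step probability transition matrix of a 2-D map.

    one_step_map : callable (x, y) -> (x', y'), e.g. one forcing period of the pendulum
    bounds       : ((xmin, xmax), (ymin, ymax)) of the cell-state space
    n_cells      : (nx, ny) number of cells per axis
    """
    rng = rng or np.random.default_rng(0)
    (xmin, xmax), (ymin, ymax) = bounds
    nx, ny = n_cells
    P = np.zeros((nx * ny, nx * ny))
    dx, dy = (xmax - xmin) / nx, (ymax - ymin) / ny
    for i in range(nx):
        for j in range(ny):
            src = i * ny + j
            xs = xmin + (i + rng.random(samples_per_cell)) * dx
            ys = ymin + (j + rng.random(samples_per_cell)) * dy
            for x, y in zip(xs, ys):
                xp, yp = one_step_map(x, y)
                ii = int((xp - xmin) // dx)
                jj = int((yp - ymin) // dy)
                if 0 <= ii < nx and 0 <= jj < ny:   # images leaving the domain are absorbed (sink cell)
                    P[src, ii * ny + jj] += 1.0 / samples_per_cell
    return P
```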
1997-02-01
application with a strong resemblance to a video game, concern has been raised that prior video game experience might have a moderating effect on scores. Much... such as spatial ability. The effects of computer or video game experience on work sample scores have not been systematically investigated. The purpose... of this study was to evaluate the incremental validity of prior video game experience over that of general aptitude as a predictor of work sample test
NASA Astrophysics Data System (ADS)
Weagant, Scott; Karanassios, Vassili
2015-06-01
The use of portable handheld computing devices for the acquisition of spectrochemical data is briefly discussed using examples from the authors' laboratory. Several network topologies are evaluated. At present, one topology that involves a portable computing device for data acquisition and spectrometer control and that has wireless access to the internet at one end and communicates with a smart phone at the other end appears to be better suited for "taking part of the lab to the sample" types of applications. Thus, spectrometric data can be accessed from anywhere in the world.
Users manual for flight control design programs
NASA Technical Reports Server (NTRS)
Nalbandian, J. Y.
1975-01-01
Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-Gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. Program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
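A minimal numerical illustration of the setting (not the thesis's suboptimal design): a continuous-time plant under a fixed linear gain with zero-order hold over i.i.d. random sampling intervals, whose sampled trajectories can be inspected for mean-square decay. The plant, gain, and jitter distribution below are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import expm

def simulate_random_sampling(A, B, K, h_sampler, x0, n_steps=500):
    """Closed loop x' = A x + B u with u = -K x(t_k) held over random intervals h_k.

    h_sampler() draws one i.i.d. sampling interval (models sampling jitter).
    Returns the state sequence at the sampling instants.
    """
    n, m = B.shape
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        h = h_sampler()
        # Zero-order-hold discretization over this random interval
        M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * h)
        Ad, Bd = M[:n, :n], M[:n, n:]
        x = (Ad - Bd @ K) @ x
        traj.append(x.copy())
    return np.array(traj)

# Example: double integrator, fixed state-feedback gain, uniformly jittered sampling
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.6]])
rng = np.random.default_rng(1)
traj = simulate_random_sampling(A, B, K, lambda: rng.uniform(0.05, 0.15), x0=[1.0, 0.0])
print(np.mean(traj[-50:] ** 2))   # a small value suggests mean-square decay for this draw
```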
An approximate, maximum terminal velocity descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1987-01-01
No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
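The paper's analytic power formulas and the rPowerSampleSize package are not reproduced here; as a rough stand-in, the following Python sketch estimates r-power by Monte Carlo for m one-sided z-tests with a single-step Bonferroni critical value and scans the sample size upward until a target r-power is reached. The effect sizes, correlation structure, and thresholds are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import norm

def r_power_mc(n, effect_sizes, r, alpha=0.05, corr=0.0, n_sim=20000, rng=None):
    """Monte Carlo r-power: P(reject >= r false nulls) for m one-sided z-tests
    using a single-step Bonferroni critical value (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    m = len(effect_sizes)
    crit = norm.ppf(1 - alpha / m)
    cov = np.full((m, m), corr) + (1 - corr) * np.eye(m)   # equicorrelated endpoints
    z = rng.multivariate_normal(np.sqrt(n) * np.array(effect_sizes), cov, size=n_sim)
    return np.mean((z > crit).sum(axis=1) >= r)

def sample_size_for_r_power(effect_sizes, r, target=0.8, **kw):
    """Coarse doubling search for a sample size reaching the target r-power."""
    n = 2
    while r_power_mc(n, effect_sizes, r, **kw) < target:
        n *= 2
    return n

# Example: 3 endpoints with standardized effects, require at least 2 rejections
print(sample_size_for_r_power([0.3, 0.35, 0.4], r=2))
```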
Preventing smoking relapse via Web-based computer-tailored feedback: a randomized controlled trial.
Elfeddali, Iman; Bolman, Catherine; Candel, Math J J M; Wiers, Reinout W; de Vries, Hein
2012-08-20
Web-based computer-tailored approaches have the potential to be successful in supporting smoking cessation. However, the potential effects of such approaches for relapse prevention and the value of incorporating action planning strategies to effectively prevent smoking relapse have not been fully explored. The Stay Quit for You (SQ4U) study compared two Web-based computer-tailored smoking relapse prevention programs with different types of planning strategies versus a control group. To assess the efficacy of two Web-based computer-tailored programs in preventing smoking relapse compared with a control group. The action planning (AP) program provided tailored feedback at baseline and invited respondents to do 6 preparatory and coping planning assignments (the first 3 assignments prior to quit date and the final 3 assignments after quit date). The action planning plus (AP+) program was an extended version of the AP program that also provided tailored feedback at 11 time points after the quit attempt. Respondents in the control group only filled out questionnaires. The study also assessed possible dose-response relationships between abstinence and adherence to the programs. The study was a randomized controlled trial with three conditions: the control group, the AP program, and the AP+ program. Respondents were daily smokers (N = 2031), aged 18 to 65 years, who were motivated and willing to quit smoking within 1 month. The primary outcome was self-reported continued abstinence 12 months after baseline. Logistic regression analyses were conducted using three samples: (1) all respondents as randomly assigned, (2) a modified sample that excluded respondents who did not make a quit attempt in conformance with the program protocol, and (3) a minimum dose sample that also excluded respondents who did not adhere to at least one of the intervention elements. Observed case analyses and conservative analyses were conducted. In the observed case analysis of the randomized sample, abstinence rates were 22% (45/202) in the control group versus 33% (63/190) in the AP program and 31% (53/174) in the AP+ program. The AP program (odds ratio 1.95, P = .005) and the AP+ program (odds ratio 1.61, P = .049) were significantly more effective than the control condition. Abstinence rates and effects differed per sample. Finally, the results suggest a dose-response relationship between abstinence and the number of program elements completed by the respondents. Despite the differences in results caused by the variation in our analysis approaches, we can conclude that Web-based computer-tailored programs combined with planning strategy assignments and feedback after the quit attempt can be effective in preventing relapse 12 months after baseline. However, adherence to the intervention seems critical for effectiveness. Finally, our results also suggest that more research is needed to assess the optimum intervention dose. Dutch Trial Register: NTR1892; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1892 (Archived by WebCite at http://www.webcitation.org/693S6uuPM).
Microstructure control for high strength 9Cr ferritic-martensitic steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Lizhen; Hoelzer, David T; Busby, Jeremy T
2012-01-01
Ferritic-martensitic (F-M) steels with 9 wt.%Cr are important structural materials for use in advanced nuclear reactors. Alloying composition adjustment, guided by computational thermodynamics, and thermomechanical treatment (TMT) were employed to develop high strength 9Cr F-M steels. Samples of four heats with controlled compositions were subjected to normalization and tempering (N&T) and TMT, respectively. Their mechanical properties were assessed by Vickers hardness and tensile testing. Ta-alloying showed a significant strengthening effect. The TMT samples showed strength superior to the N&T samples with similar ductility. All the samples showed greater strength than NF616, which was either comparable to or greater than the literature data of the PM2000 oxide-dispersion-strengthened (ODS) steel at temperatures up to 650 °C without noticeable reduction in ductility. A variety of microstructural analyses together with computational thermodynamics provided rational interpretations of the strength enhancement. Creep tests are being initiated because the increased yield strength of the TMT samples does not allow their long-term creep behavior to be deduced.
ERIC Educational Resources Information Center
Nkemdilim, Egbunonu Roseline; Okeke, Sam O. C.
2014-01-01
This study investigated the effects of computer-assisted instruction (CAI) on students' achievement in ecological concepts. A quasi-experimental design, specifically the pre-test, post-test, non-equivalent control group design, was adopted. The sample consisted of sixty-six (66) senior secondary year two (SS II) biology students, drawn from two…
Evaluating the Effectiveness of Computer Applications in Developing English Learning
ERIC Educational Resources Information Center
Whitaker, James Todd
2016-01-01
I examined the effectiveness of self-directed learning and English learning with computer applications on college students in Bangkok, Thailand, in a control-group experimental-group pretest-posttest design. The hypothesis was tested using a t test: two-sample assuming unequal variances to establish the significance of mean scores between the two…
Biomarker Evaluation Does Not Confirm Efficacy of Computer-Tailored Nutrition Education
ERIC Educational Resources Information Center
Kroeze, Willemieke; Dagnelie, Pieter C.; Heymans, Martijn W.; Oenema, Anke; Brug, Johannes
2011-01-01
Objective: To evaluate the efficacy of computer-tailored nutrition education with objective outcome measures. Design: A 3-group randomized, controlled trial with posttests at 1 and 6 months post-intervention. Setting: Worksites and 2 neighborhoods in the urban area of Rotterdam. Participants: A convenience sample of healthy Dutch adults (n = 442).…
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated as the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized, architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
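The Fast Invariant Imbedding algorithm itself is not described in the abstract; for reference, the sketch below shows a standard FFT-based fast solver for the discrete Poisson equation on a regular grid, i.e., the problem class the wavefront-control computation reduces to. Periodic boundary conditions and the normalization are illustrative assumptions, not the paper's method.

```python
import numpy as np

def poisson_fft_periodic(f, h=1.0):
    """Solve the 5-point discrete Poisson equation lap(u) = f on a periodic grid.

    Standard spectral approach (a common fast Poisson solver, not the paper's
    Fast Invariant Imbedding algorithm): diagonalize the discrete Laplacian
    with the FFT. The source f should have (approximately) zero mean.
    """
    n0, n1 = f.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n0)
    ky = 2.0 * np.pi * np.fft.fftfreq(n1)
    # Eigenvalues of the 5-point Laplacian: (2 cos kx + 2 cos ky - 4) / h^2
    lam = (2.0 * np.cos(kx)[:, None] + 2.0 * np.cos(ky)[None, :] - 4.0) / h**2
    F = np.fft.fft2(f)
    lam[0, 0] = 1.0            # zero mode: fix the arbitrary additive constant
    U = F / lam
    U[0, 0] = 0.0
    return np.real(np.fft.ifft2(U))
```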
Polarization Imaging Apparatus with Auto-Calibration
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)
2013-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I0, I1, I2 and I3, of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then four Stokes components of a Stokes image, S0, S1, S2 and S3, were calculated using the four intensity images.
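The explicit inversion formulas for this retarder configuration are not given in the abstract, so the following Python sketch assumes a calibrated 4x4 instrument matrix (one row per VPR setting) relating the Stokes vector to the four measured intensities and recovers the Stokes image by pixel-wise inversion; the matrix itself would come from the auto-calibration step, and all names are illustrative.

```python
import numpy as np

def stokes_image(intensity_stack, A):
    """Recover a Stokes image from four intensity images.

    intensity_stack : array (4, H, W) with I0, I1, I2, I3
    A               : (4, 4) calibrated instrument matrix so that I = A @ S per pixel
                      (rows correspond to the (0,0), (pi,0), (pi,pi), (pi/2,pi) settings)
    Returns S : array (4, H, W) with S0, S1, S2, S3.
    """
    _, H, W = intensity_stack.shape
    I = intensity_stack.reshape(4, -1)     # one column per pixel
    S = np.linalg.solve(A, I)              # pixel-wise inversion in one solve
    return S.reshape(4, H, W)
```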
Polarization imaging apparatus with auto-calibration
Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui
2013-08-20
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5°, a second variable phase retarder with its optical axis aligned at 45°, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I0, I1, I2 and I3, of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (π,0), (π,π) and (π/2,π), respectively. Then four Stokes components of a Stokes image, S0, S1, S2 and S3, were calculated using the four intensity images.
Polarization imaging apparatus
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Chen, Qiushui (Inventor); Zhao, Hongzhi (Inventor)
2010-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set 11, a linear polarizer 14 with its optical axis 18, a first variable phase retarder 12 with its optical axis 16 aligned at 22.5° to axis 18, a second variable phase retarder 13 with its optical axis 17 aligned at 45° to axis 18, an imaging sensor 15 for sensing the intensity images of the sample, a controller 101 and a computer 102. The two variable phase retarders 12 and 13 were controlled independently by a computer 102 through a controller unit 101 which generates a sequence of voltages to control the phase retardations of VPRs 12 and 13. A set of four intensity images, I0, I1, I2 and I3, of the sample were captured by imaging sensor 15 when the phase retardations of VPRs 12 and 13 were set at (0,0), (π,0), (π,π) and (π/2,π), respectively. Then four Stokes components of a Stokes image, S0, S1, S2 and S3, were calculated using the four intensity images.
Asynchronous sampled-data approach for event-triggered systems
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Memon, Azhar M.
2017-11-01
While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as coupling between control and event-condition design, optimisation of the available resources such as control, communication and computation power, and time delays due to computation and the communication network. With this motivation, the paper presents separate designs of the control and event-triggering mechanism, thus simplifying the overall analysis; an asynchronous linear quadratic Gaussian controller which tackles delays and the aperiodic nature of transmissions; and a novel event mechanism which compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control and the results show significant improvement against the periodic counterpart.
Fisher, Jeffrey D; Amico, K Rivet; Fisher, William A; Cornman, Deborah H; Shuper, Paul A; Trayling, Cynthia; Redding, Caroline; Barta, William; Lemieux, Anthony F; Altice, Frederick L; Dieckhaus, Kevin; Friedland, Gerald
2011-11-01
We evaluated the efficacy of LifeWindows, a theory-based, computer-administered antiretroviral (ARV) therapy adherence support intervention, delivered to HIV+ patients at routine clinical care visits. 594 HIV+ adults receiving HIV care at five clinics were randomized to intervention or control arms. Intervention vs. control impact in the intent-to-treat sample (including participants whose ARVs had been entirely discontinued, who infrequently attended care, or infrequently used LifeWindows) did not reach significance. Intervention impact in the On Protocol sample (328 intervention and control arm participants whose ARVs were not discontinued, who attended care and were exposed to LifeWindows regularly) was significant. On Protocol intervention vs. control participants achieved significantly higher levels of perfect 3-day ACTG-assessed adherence over time, with sensitivity analyses maintaining this effect down to 70% adherence. This study supports the utility of LifeWindows and illustrates that patients on ARVs who persist in care at clinical care sites can benefit from adherence promotion software.
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
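A minimal sketch of the kind of gain/sample-rate trade-off being designed, using direct discrete-time eigenvalue checks rather than the report's parameter plane method: discretize the planar rigid body with a zero-order hold, close the loop with position, integral, and derivative feedback, and test whether all closed-loop poles lie inside the unit circle. The gains, inertia, and sample times below are illustrative assumptions.

```python
import numpy as np

def closed_loop_stable(kp, ki, kd, T, inertia=1.0):
    """Stability check for a rigid body theta'' = u / inertia under sampled
    position + integral + derivative feedback with sample time T (ZOH)."""
    Ad = np.array([[1.0, T], [0.0, 1.0]])
    Bd = np.array([[T**2 / (2 * inertia)], [T / inertia]])
    K = np.array([[kp, kd]])
    # Augment the state with the (discrete) integral of the position error
    A_cl = np.block([
        [Ad - Bd @ K, -ki * Bd],
        [np.array([[T, 0.0]]), np.array([[1.0]])],
    ])
    return bool(np.all(np.abs(np.linalg.eigvals(A_cl)) < 1.0))

# Gains that stabilize at a 10 Hz sample rate can fail at 1 Hz:
print(closed_loop_stable(4.0, 0.5, 3.0, T=0.1), closed_loop_stable(4.0, 0.5, 3.0, T=1.0))
```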
Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.
Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E
1999-05-20
The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
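A toy version of the predictive feed-rate selection described above, under simplifying assumptions: a well-mixed glucose balance, an uptake rate estimated by a linear fit to recent off-line samples, and a known fresh-medium glucose concentration. The numbers and function names are illustrative, not the authors' adaptive software routines.

```python
import numpy as np

def estimate_uptake(times, glucose):
    """Estimate the current glucose uptake rate (mM/h) from recent samples by a
    linear fit; a fuller model would also account for dilution by the feed."""
    slope, _ = np.polyfit(times, glucose, 1)
    return -slope                                   # consumption makes the slope negative

def feed_rate(glucose_now, uptake, setpoint, feed_conc, volume, horizon):
    """Choose a perfusion feed rate F (L/h) so the predicted concentration after
    `horizon` hours reaches the set point, for dG/dt = (F/V)(G_feed - G) - q."""
    dilution = (uptake + (setpoint - glucose_now) / horizon) / (feed_conc - glucose_now)
    return max(0.0, dilution * volume)

# Hypothetical numbers: recent samples drifting below a 15 mM set point
t = np.array([0.0, 12.0, 24.0])
g = np.array([16.0, 14.5, 13.2])
q = estimate_uptake(t, g)
print(feed_rate(g[-1], q, setpoint=15.0, feed_conc=25.0, volume=1.2, horizon=12.0))
```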
Murphy, C L; McLaws, M
2000-04-01
To adopt an evidence-based approach, professionals must be able to access, identify, interpret, and critically appraise best evidence. Critical appraisal requires essential skills, such as computer literacy and an understanding of research principles. These skills are also required for professionals to contribute to evidence. In 1996, members of the Australian Infection Control Association were surveyed to establish a profile including the extent to which they were reading infection control publications, using specific documents for policy and guideline development, developing and undertaking research, publishing research, and using computers. The relationships between demographics, computer use, and research activity were examined. The response rate was 63.4% (630/993). The study group comprised mostly women (96.1%), and most (66.4%) were older than 40 years of age. Median infection control experience was 4 years (mean, 5.4 years; range, <12 months to 35 years). When developing guidelines and policies (92.7%; 584/630), infection control professionals reviewed State Health Department Infection Control Guidelines and Regulations. Research relating to infection control was undertaken by 21.5% (135/628) of the sample, and 27.6% (37/134) of this group published their research findings. Of the respondents (51.1%; 318/622) who used a computer to undertake infection control tasks, the majority (89.0%) used a personal computer for word processing. Regardless of infection control experience, Australian infection control professionals must be adequately prepared to contribute to, access, appraise, and where appropriate, apply best evidence to their practice. We suggest that computer literacy, an understanding of research principles, and familiarity with infection control literature are three essential skills that infection control professionals must possess and regularly exercise.
One-to-One Computing and Student Achievement in Ohio High Schools
ERIC Educational Resources Information Center
Williams, Nancy L.; Larwin, Karen H.
2016-01-01
This study explores the impact of one-to-one computing on student achievement in Ohio high schools as measured by performance on the Ohio Graduation Test (OGT). The sample included 24 treatment schools that were individually paired with a similar control school. An interrupted time series methodology was deployed to examine OGT data over a period…
Cybernetic Control of an Electrochemical Repertoire.
ERIC Educational Resources Information Center
He, Peixin; And Others
1982-01-01
Describes major features of a computer-operated, cybernetic potentiostat and the development, design, and operation of the software in ROM. The instrument contains control circuitry and software making it compatible with the static mercury drop electrode produced by EG&G Princeton Applied Research Corporation. Sample results using the…
A Comparison of Wavetable and FM Data Reduction Methods for Resynthesis of Musical Sounds
NASA Astrophysics Data System (ADS)
Horner, Andrew
An ideal music-synthesis technique provides both high-level spectral control and efficient computation. Simple playback of recorded samples lacks spectral control, while additive sine-wave synthesis is inefficient. Wavetable and frequency-modulation synthesis, however, are two popular synthesis techniques that are very efficient and use only a few control parameters.
NASA Astrophysics Data System (ADS)
Liu, Hai-Tao; Wen, Zhi-Yu; Xu, Yi; Shang, Zheng-Guo; Peng, Jin-Lan; Tian, Peng
2017-09-01
In this paper, an integrated microfluidic analysis microsystem with bacterial capture enrichment and in-situ impedance detection is proposed, based on a microfluidic-chip dielectrophoresis technique and the electrochemical impedance detection principle. The microsystem includes a microfluidic chip, a main control module, a drive and control module, a signal detection and processing module, and a result display unit. The main control module produces the work sequence of the impedance detection system parts and provides data communication functions; the drive and control circuit generates an AC signal with adjustable amplitude and frequency, which is applied to the foodborne-pathogen impedance analysis microsystem to realize capture enrichment and impedance detection. The signal detection and processing circuit translates the current signal into the impedance of the bacteria and transfers it to a computer, where the final detection result is displayed. The experimental sample was prepared by adding an Escherichia coli standard sample to a chicken sample solution, and the samples were tested on the dielectrophoresis capture enrichment and in-situ impedance detection microsystem with micro-array electrode microfluidic chips. The experiments show that the Escherichia coli detection limit of the microsystem is 5 × 10⁴ CFU/mL and the detection time is within 6 min under the optimized operating conditions of 10 V detection voltage and 500 kHz detection frequency. The integrated microfluidic analysis microsystem lays a solid foundation for rapid, real-time, in-situ detection of bacteria.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms are associated with some disadvantages, such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different orders of filter and partition size is presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
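For context, the conventional time-domain FXLMS algorithm whose cost the partitioned-block frequency-domain variants reduce can be sketched as follows; the filter lengths, step size, and signal names are illustrative.

```python
import numpy as np

def fxlms(x, d, s_hat, L=64, mu=1e-3):
    """Conventional time-domain filtered-x LMS (the baseline referenced above).

    x: reference signal, d: disturbance at the error sensor,
    s_hat: secondary-path impulse-response estimate. Returns the error signal.
    """
    M = len(s_hat)
    w = np.zeros(L)               # ANC filter weights
    xbuf = np.zeros(L)            # reference history for the control filter
    xfbuf = np.zeros(L)           # filtered-reference history for the update
    xs = np.zeros(M)              # reference history for the secondary-path filter
    ybuf = np.zeros(M)            # anti-noise history through the secondary path
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        xs = np.roll(xs, 1); xs[0] = x[n]
        y = w @ xbuf                                   # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] + s_hat @ ybuf                     # residual after cancellation
        xfbuf = np.roll(xfbuf, 1); xfbuf[0] = s_hat @ xs   # filtered-x sample
        w -= mu * e[n] * xfbuf                         # LMS weight update
    return e
```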
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, and thus to estimate the genetic parameters, by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Dose controlled low energy electron irradiator for biomolecular films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S. V. K., E-mail: svkk@tifr.res.in; Tare, Satej T.; Upalekar, Yogesh V.
2016-03-15
We have developed a multi-target, Low Energy Electron (LEE), precise dose-controlled irradiator for biomolecular films. Up to seven samples can be irradiated one after another at any preset electron energy and dose under UHV conditions without venting the chamber. In addition, one more sample goes through all the steps except irradiation, which can be used as a control for comparison with the irradiated samples. All the samples are protected against stray electron irradiation by biasing them at −20 V during the entire period, except during irradiation. Ethernet-based communication electronics hardware, LEE beam control electronics, and a computer interface were developed in house. The graphical user interface to control the irradiation and dose measurement was developed using National Instruments LabWindows/CVI. The working and reliability of the dose-controlled irradiator have been fully tested over the electron energy range of 0.5 to 500 eV by studying LEE induced single strand breaks to ΦX174 RF1 dsDNA.
Abedini, Yasamin; Zamani, Bibi Eshrat; Kheradmand, Ali; Rajabizadeh, Ghodratollah
2012-01-01
Addiction to computer (video) games in adolescents and its relationship with educational progress has recently attracted the attention of rearing and education experts as well as organizations and institutes involved in physical and mental health. The current research attempted to propose a structural model of the relationships between parenting styles, mothers' occupation status, and addiction to computer games, self-control, and educational progress of secondary school students. Using multistage cluster random sampling, 500 female and male secondary school students in Kerman (Iran) were selected and studied. The research tools included self-control, parenting styles, and addiction to computer games questionnaires and a self-made questionnaire containing demographic details. The data was analyzed using exploratory factor analysis, Cronbach's alpha coefficient and route analysis (in LISREL). We found self-control to have a linking role in the relationship between four parenting styles and educational progress. Mothers' occupation status was directly and significantly correlated with addiction to computer games. Although four parenting styles directly and significantly affected addiction to computer games, the findings did not support the linking role of addiction to computer games in the relationship between four parenting styles and educational progress. In agreement with previous studies, the current research reflected the impact of four parenting styles on self-control, addiction to computer games, and educational progress of students. Among the parenting styles, authoritative style can affect the severity of addiction to computer games through self-control development. It can thus indirectly influence the educational progress of students. Parents are recommended to use authoritative parenting style to help both self-management and psychological health of their children. The employed mothers are also recommended to have more supervision and control on the degree and type of computer games selected by their children.
A Homing Missile Control System to Reduce the Effects of Radome Diffraction
NASA Technical Reports Server (NTRS)
Smith, Gerald L.
1960-01-01
The problem of radome diffraction in radar-controlled homing missiles at high speeds and high altitudes is considered from the point of view of developing a control system configuration which will alleviate the deleterious effects of the diffraction. It is shown that radome diffraction is in essence a kinematic feedback of body angular velocities which causes the radar to sense large apparent line-of-sight angular velocities. The normal control system cannot distinguish between the erroneous and actual line-of-sight rates, and entirely wrong maneuvers are produced which result in large miss distances. The problem is resolved by adding to the control system a special-purpose computer which utilizes measured body angular velocity to extract from the radar output true line-of-sight information for use in steering the missile. The computer operates on the principle of sampling and storing the radar output at instants when the body angular velocity is low and using this stored information for maneuvering commands. In addition, when the angular velocity is not low the computer determines a radome diffraction compensation which is subtracted from the radar output to reduce the error in the sampled information. Analog simulation results for the proposed control system operating in a coplanar (vertical plane) attack indicate a potential decrease in miss distance to an order of magnitude below that for a conventional system. Effects of glint noise, random target maneuvers, initial heading errors, and missile maneuverability are considered in the investigation.
ERIC Educational Resources Information Center
Olori, Abiola Lateef; Igbosanu, Adekunle Olusegun
2016-01-01
The study was carried out to determine the use of computer-based multimedia presentation on Senior Secondary School Students' Achievement in Agricultural Science. The study was a quasi-experimental, pre-test, post-test control group research design type, using intact classes. A sample of eighty (80) Senior Secondary School One (SS II) students was…
Fast ForWord: An Investigation of the Effectiveness of Computer-Assisted Reading Intervention
ERIC Educational Resources Information Center
Soboleski, Penny K.
2011-01-01
The three-fold purpose of this quasi-experimental study was to examine the impact of the computer-based reading program Fast ForWord (FFW) on the reading achievement of second-grade students in an Ohio school district. The sample included 360 students (treatment group, n = 85; control group, n = 275) from four elementary buildings. FFW is an…
NASA Technical Reports Server (NTRS)
Bekey, G. A.
1971-01-01
Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.
Moon-Mars simulation campaign in volcanic Eifel: Remote science support and sample analysis
NASA Astrophysics Data System (ADS)
Offringa, Marloes; Foing, Bernard H.; Kamps, Oscar
2016-07-01
Moon-Mars analogue missions using a mock-up lander that is part of the ESA/ILEWG ExoGeoLab project were conducted during Eifel field campaigns in 2009, 2015 and 2016 (Foing et al., 2010). In the last EuroMoonMars2016 campaign the lander was used to conduct reconnaissance experiments and in situ geological scientific analysis of samples, with a payload that mainly consisted of a telescope and a UV-VIS reflectance spectrometer. The aim of the campaign was to exhibit possibilities for the ExoGeoLab lander to perform remotely controlled experiments and test its applicability in the field by simulating the interaction with astronauts. The Eifel region in Germany where the experiments with the ExoGeoLab lander were conducted is a Moon-Mars analogue due to its geological setting and volcanic rock composition. The research conducted by analysis equipment on the lander could function in support of Moon-Mars sample return missions, by providing preliminary insight into characteristics of the analyzed samples. The set-up of the prototype lander comprised a telescope with camera and solar power equipment deployed on top; the UV-VIS reflectance spectrometer, together with computers and a sample webcam, was situated in the middle compartment; and a sample analysis test bench was attached to the side, reachable by astronauts from outside the lander. An alternative light source that illuminated the samples in case of insufficient daylight was placed on top of the lander and functioned on solar power. The telescope, teleoperated from a nearby stationed pressurized transport vehicle that functioned as a base control center, attained an overview of the sampling area and assisted the astronauts in their initial scouting pursuits. Locations of suitable sampling sites, identified from the obtained images, were communicated to the astronauts, and the samples were then acquired during a simulated EVA. Sampled rocks and soils were remotely analyzed by the base control center, while the astronauts assisted by placing the samples onto the sample holder and adjusting test bench settings in order to obtain spectra. After analysis the collected samples were documented and stored by the astronauts, before returning to the base. A point of improvement identified during the EuroMoonMars2016 analogue campaign is the remote control of the computers over an established network between the base and the lander; during future missions the computers should preferably be operated over a larger distance without interference. In the bottom compartment of the lander a rover is stored that in future campaigns could replace astronaut functions by collecting and returning samples, as well as performing adjustments to the analysis test bench by using a remotely controlled robotic arm. Acknowledgements: we thank Dominic Doyle for ESTEC optical lab support, Aidan Cowley (EAC) and Matthias Sperl (DLR) for support discussions, and collaborators from the EuroMoonMars Eifel 2015-16 campaign team.
Generalized concurrence in boson sampling.
Chin, Seungbeom; Huh, Joonsuk
2018-04-17
A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named "Fock state concurrence sum" C_S, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of the Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state BS. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents would provide a unified viewpoint to interpret the quantum computing power of linear optics.
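Since the transition amplitudes reduce to matrix permanents, the classical baseline is an exponential-time permanent evaluation; a compact (unoptimized) Python version of Ryser's formula is shown below for illustration.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula, the O(2^n * n) classical
    route to boson-sampling transition amplitudes."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # (-1)^|S| times the product over rows of the row sums restricted to S
            total += (-1) ** k * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

# Sanity check against the 2x2 permanent ad + bc
print(permanent_ryser([[1.0, 2.0], [3.0, 4.0]]))   # expect 1*4 + 2*3 = 10
```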
Jesse, Stephen [Knoxville, TN; Geohegan, David B [Knoxville, TN; Guillorn, Michael [Brooktondale, NY
2009-02-17
Methods and apparatus are described for SEM imaging and measuring electronic transport in nanocomposites based on electric field induced contrast. A method includes mounting a sample onto a sample holder, the sample including a sample material; wire bonding leads from the sample holder onto the sample; placing the sample holder in a vacuum chamber of a scanning electron microscope; connecting leads from the sample holder to a power source located outside the vacuum chamber; controlling secondary electron emission from the sample by applying a predetermined voltage to the sample through the leads; and generating an image of the secondary electron emission from the sample. An apparatus includes a sample holder for a scanning electron microscope having an electrical interconnect and leads on top of the sample holder electrically connected to the electrical interconnect; a power source and a controller connected to the electrical interconnect for applying voltage to the sample holder to control the secondary electron emission from a sample mounted on the sample holder; and a computer coupled to a secondary electron detector to generate images of the secondary electron emission from the sample.
1989-10-01
REVIEW MENU PROGRAM(S) CHAPS PURPOSE AND OVERVIEW: The Do Review menu allows the user to select which missions to perform detailed analysis on and... input files must be resident on the computer you are running SUPR on. Any interface or file transfer programs must be successfully executed prior to... COMPUTER PROGRAM WAS DEVELOPED BY SYSTEMS CONTROL TECHNOLOGY FOR THE DEPUTY CHIEF OF STAFF/OPERATIONS, HQ USAFE. THE USE OF THE COMPUTER PROGRAM IS
Evaluation of 3D airway imaging of obstructive sleep apnea with cone-beam computed tomography.
Ogawa, Takumi; Enciso, Reyes; Memon, Ahmed; Mah, James K; Clark, Glenn T
2005-01-01
This study evaluates the use of cone-beam computed tomography (CT) for imaging the upper airway structure of Obstructive Sleep Apnea (OSA) patients. The total airway volume and the anteroposterior dimension of the oropharyngeal airway showed significant group differences between OSA patients and gender-matched controls, suggesting that with a larger sample size these measurements may distinguish the two groups. We demonstrate the utility of 3D airway imaging with cone-beam computed tomography for anatomical diagnosis.
A computer controlled signal preprocessor for laser fringe anemometer applications
NASA Technical Reports Server (NTRS)
Oberle, Lawrence G.
1987-01-01
The operation of most commercially available laser fringe anemometer (LFA) counter-processors assumes that adjustments are made to the signal processing independently of the computer used to reduce the acquired data. Not only does the researcher desire a record of these parameters attached to the data acquired, but changes in flow conditions generally require that these settings be changed to improve data quality. Because of this limitation, on-line modification of the data acquisition parameters can be difficult and time consuming. A computer-controlled signal preprocessor has been developed which makes possible this optimization of the photomultiplier signal as a normal part of the data acquisition process. It allows computer control of the filter selection, signal gain, and photomultiplier voltage. The raw signal from the photomultiplier tube is input to the preprocessor which, under the control of a digital computer, filters the signal and amplifies it to an acceptable level. The counter-processor used at Lewis Research Center generates the particle interarrival times, as well as the time-of-flight of the particle through the probe volume. The signal preprocessor allows computer control of the acquisition of these data. Through the preprocessor, the computer can also control the handshaking signals for the interface between itself and the counter-processor. Finally, the signal preprocessor splits the pedestal from the signal before filtering, monitors the photomultiplier dc current, sends a signal proportional to this current to the computer through an analog-to-digital converter, and provides an alarm if the current exceeds a predefined maximum. Complete drawings and explanations are provided in the text as well as a sample interface program for use with the data acquisition software.
Intermittent control: a computational theory of human control.
Gawthrop, Peter; Loram, Ian; Lakie, Martin; Gollee, Henrik
2011-02-01
The paradigm of continuous control using internal models has advanced understanding of human motor control. However, this paradigm ignores some aspects of human control, including intermittent feedback, serial ballistic control, triggered responses and refractory periods. It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control. Continuous control is included as a special case, but sampling, system matched hold, an intermittent predictor and an event trigger allow serial open-loop trajectories using intermittent feedback. The implementation here may be described as "continuous observation, intermittent action". Beyond explaining unimodal regulation distributions in common with continuous control, these features naturally explain refractoriness and bimodal stabilisation distributions observed in double stimulus tracking experiments and quiet standing, respectively. Moreover, given that human control systems contain significant time delays, a biological-cybernetic rationale favours intermittent over continuous control: intermittent predictive control is computationally less demanding than continuous predictive control. A standard continuous-time predictive control model of the human operator is used as the underlying design method for an event-driven intermittent controller. It is shown that when event thresholds are small and sampling is regular, the intermittent controller can masquerade as the underlying continuous-time controller and thus, under these conditions, the continuous-time and intermittent controller cannot be distinguished. This explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.
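A minimal sketch of the "continuous observation, intermittent action" idea under stated assumptions (a linear plant, Euler integration, a norm-threshold event trigger with a refractory interval, and a system-matched hold that runs the predicted closed loop between events); it is not the authors' published implementation, and all parameters are illustrative.

```python
import numpy as np

def intermittent_control(A, B, K, x0, dt=0.01, t_end=10.0, threshold=0.05, delta_min=0.1):
    """Between events the input follows an open-loop trajectory from a held state
    prediction; a new feedback sample is taken only when the prediction error
    exceeds the threshold and the refractory interval delta_min has elapsed."""
    x = np.array(x0, float)        # true plant state (continuously observed)
    xh = x.copy()                  # predictor state driving the open-loop input
    t_last = -np.inf
    traj = []
    for k in range(int(t_end / dt)):
        t = k * dt
        if np.linalg.norm(x - xh) > threshold and (t - t_last) >= delta_min:
            xh = x.copy()                           # intermittent feedback sample (event)
            t_last = t
        u = -K @ xh                                 # open-loop input from the predictor
        x += dt * (A @ x + B @ u)                   # plant (Euler step)
        xh += dt * ((A - B @ K) @ xh)               # system-matched hold / predictor
        traj.append(x.copy())
    return np.array(traj)

# Example: lightly damped second-order plant with a simple state-feedback gain
A = np.array([[0.0, 1.0], [-1.0, -0.2]]); B = np.array([[0.0], [1.0]]); K = np.array([[2.0, 1.0]])
states = intermittent_control(A, B, K, x0=[1.0, 0.0])
print(states[-1])
```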
Central Fetal Monitoring With and Without Computer Analysis: A Randomized Controlled Trial.
Nunes, Inês; Ayres-de-Campos, Diogo; Ugwumadu, Austin; Amin, Pina; Banfield, Philip; Nicoll, Antony; Cunningham, Simon; Sousa, Paulo; Costa-Santos, Cristina; Bernardes, João
2017-01-01
To evaluate whether intrapartum fetal monitoring with computer analysis and real-time alerts decreases the rate of newborn metabolic acidosis or obstetric intervention when compared with visual analysis. A randomized clinical trial carried out in five hospitals in the United Kingdom evaluated women with singleton, vertex fetuses of 36 weeks of gestation or greater during labor. Continuous central fetal monitoring by computer analysis and online alerts (experimental arm) was compared with visual analysis (control arm). Fetal blood sampling and electrocardiographic ST waveform analysis were available in both arms. The primary outcome was incidence of newborn metabolic acidosis (pH less than 7.05 and base deficit greater than 12 mmol/L). Prespecified secondary outcomes included operative delivery, use of fetal blood sampling, low 5-minute Apgar score, neonatal intensive care unit admission, hypoxic-ischemic encephalopathy, and perinatal death. A sample size of 3,660 per group (N=7,320) was planned to be able to detect a reduction in the rate of metabolic acidosis from 2.8% to 1.8% (two-tailed α of 0.05 with 80% power). From August 2011 through July 2014, 32,306 women were assessed for eligibility and 7,730 were randomized: 3,961 to computer analysis and online alerts, and 3,769 to visual analysis. Baseline characteristics were similar in both groups. Metabolic acidosis occurred in 16 participants (0.40%) in the experimental arm and 22 participants (0.58%) in the control arm (relative risk 0.69 [0.36-1.31]). No statistically significant differences were found in the incidence of secondary outcomes. Compared with visual analysis, computer analysis of fetal monitoring signals with real-time alerts did not significantly reduce the rate of metabolic acidosis or obstetric intervention. A lower-than-expected rate of newborn metabolic acidosis was observed in both arms of the trial. ISRCTN Registry, http://www.isrctn.com, ISRCTN42314164.
Code of Federal Regulations, 2013 CFR
2013-01-01
... in Step (c). (6) For an energy or water consumption standard (ECS), compute the upper control limit (UCL2) for the mean of the combined first and second samples using the DOE ECS as the desired mean and a...)(1). (7) For an energy or water consumption standard (ECS), compare the combined sample mean (x2) to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... in Step (c). (6) For an energy or water consumption standard (ECS), compute the upper control limit (UCL2) for the mean of the combined first and second samples using the DOE ECS as the desired mean and a...)(1). (7) For an energy or water consumption standard (ECS), compare the combined sample mean (x2) to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... in Step (c). (6) For an energy or water consumption standard (ECS), compute the upper control limit (UCL2) for the mean of the combined first and second samples using the DOE ECS as the desired mean and a...)(1). (7) For an energy or water consumption standard (ECS), compare the combined sample mean (x2) to...
Scheduling whole-air samples above the Trade Wind Inversion from SUAS using real-time sensors
NASA Astrophysics Data System (ADS)
Freer, J. E.; Greatwood, C.; Thomas, R.; Richardson, T.; Brownlow, R.; Lowry, D.; MacKenzie, A. R.; Nisbet, E. G.
2015-12-01
Small Unmanned Air Systems (SUAS) are increasingly being used for a range of science applications. Here we explore their use to schedule the sampling of air masses up to 2.5 km above ground using computer-controlled bespoke octocopter platforms. Whole-air sampling is targeted above, within and below the Trade Wind Inversion (TWI). On-board sensors profiled the TWI characteristics in real time on ascent and, hence, guided the altitudes at which samples were taken on descent. The science driver for this research is investigation of the Southern Methane Anomaly and, more broadly, the hemispheric-scale transport of long-lived atmospheric tracers in the remote troposphere. Here we focus on the practical application of SUAS for this purpose, highlighting the need for mission planning, computer control, onboard sensors and logistics in deploying such technologies for out of line-of-sight applications. We show how such a platform can be deployed successfully, resulting in some 60 sampling flights within a 10 day period. Challenges remain regarding the deployment of such platforms routinely and cost-effectively, particularly regarding training and support. We present some initial results from the methane sampling and its implication for exploring and understanding the Southern Methane Anomaly.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
Luján, J L; Crago, P E
2004-11-01
Neuroprosthestic systems can be used to restore hand grasp and wrist control in individuals with C5/C6 spinal cord injury. A computer-based system was developed for the implementation, tuning and clinical assessment of neuroprosthetic controllers, using off-the-shelf hardware and software. The computer system turned a Pentium III PC running Windows NT into a non-dedicated, real-time system for the control of neuroprostheses. Software execution (written using the high-level programming languages LabVIEW and MATLAB) was divided into two phases: training and real-time control. During the training phase, the computer system collected input/output data by stimulating the muscles and measuring the muscle outputs in real-time, analysed the recorded data, generated a set of training data and trained an artificial neural network (ANN)-based controller. During real-time control, the computer system stimulated the muscles using stimulus pulsewidths predicted by the ANN controller in response to a sampled input from an external command source, to provide independent control of hand grasp and wrist posture. System timing was stable, reliable and capable of providing muscle stimulation at frequencies up to 24Hz. To demonstrate the application of the test-bed, an ANN-based controller was implemented with three inputs and two independent channels of stimulation. The ANN controller's ability to control hand grasp and wrist angle independently was assessed by quantitative comparison of the outputs of the stimulated muscles with a set of desired grasp or wrist postures determined by the command signal. Controller performance results were mixed, but the platform provided the tools to implement and assess future controller designs.
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs; shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses, but not the number of overflows, determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantages of the present system are the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
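For readers unfamiliar with the off-line processing described, a minimal sketch is given below (illustrative only; the Poisson record stands in for real 8-bit pulse-count data, and the lag count is assumed): an unnormalised autocorrelation and a periodogram power spectrum are computed from a stored record of count samples.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated record of 8-bit pulse counts (e.g. 49 000 samples, one per sample time).
counts = rng.poisson(lam=3.0, size=49_000).astype(np.float64)

# Autocorrelation estimate for lags 0..255, one pass over the primary data.
max_lag = 256
n = counts.size
corr = np.array([np.dot(counts[:n - k], counts[k:]) / (n - k) for k in range(max_lag)])

# Power spectrum of the same record via FFT (periodogram estimate).
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))**2 / n

print(corr[:5])
print(spectrum[:5])
```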
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability as, in most genome-scale metabolic models, no EM can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in the medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in Escherichia coli metabolic network.
Tomographic Imaging of Water Injection and Withdrawal in PEMFC Gas Diffusion Layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGill U; Gostick, J. T.; Gunterman, H. P.
2010-06-25
X-ray computed tomography was used to visualize the water configurations inside gas diffusion layers for various applied capillary pressures, corresponding to both water invasion and withdrawal. A specialized sample holder was developed to allow capillary pressure control on the small-scale samples required. Tests were performed on GDL specimens with and without hydrophobic treatments.
A Computer-Controlled Classroom Model of an Atomic Force Microscope
NASA Astrophysics Data System (ADS)
Engstrom, Tyler A.; Johnson, Matthew M.; Eklund, Peter C.; Russin, Timothy J.
2015-12-01
The concept of "seeing by feeling" as a way to circumvent limitations on sight is universal on the macroscopic scale—reading Braille, feeling one's way around a dark room, etc. The development of the atomic force microscope (AFM) in 1986 extended this concept to imaging in the nanoscale. While there are classroom demonstrations that use a tactile probe to map the topography or some other property of a sample, the rastering of the probe over the sample is manually controlled, which is both tedious and potentially inaccurate. Other groups have used simulation or tele-operation of an AFM probe. In this paper we describe a teaching AFM with complete computer control to map out topographic and magnetic properties of a "crystal" consisting of two-dimensional arrays of spherical marble "atoms." Our AFM is well suited for lessons on the "Big Ideas of Nanoscale" such as tools and instrumentation, as well as a pre-teaching activity for groups with remote access AFM or mobile AFM. The principle of operation of our classroom AFM is the same as that of a real AFM, excepting the nature of the force between sample and probe.
Design, implementation and flight testing of PIF autopilots for general aviation aircraft
NASA Technical Reports Server (NTRS)
Broussard, J. R.
1983-01-01
The designs of Proportional-Integrated-Filter (PIF) autopilots for a General Aviation (NAVION) aircraft are presented. The PIF autopilot uses the sampled-data regulator and command generator tracking to determine roll select, pitch select, heading select, altitude select and localizer/glideslope capture and hold autopilot modes. The PIF control law uses typical General Aviation sensors for state feedback, command error integration for command tracking, digital complementary filtering and analog prefiltering for sensor noise suppression, a control filter for computation delay accommodation and the incremental form to eliminate trim values in implementation. Theoretical developments, described in detail, were needed to combine the sampled-data regulator with command generator tracking for use as a digital flight control system. The digital PIF autopilots are evaluated using closed-loop eigenvalues and linear simulations. The implementation of the PIF autopilots in a digital flight computer using a high order language (FORTRAN) is briefly described. The successful flight test results for each PIF autopilot mode are presented.
An all digital phase locked loop for FM demodulation.
NASA Technical Reports Server (NTRS)
Greco, J.; Garodnick, J.; Schilling, D. L.
1972-01-01
A phase-locked loop designed with all-digital circuitry, which avoids certain problems, and a digital voltage-controlled oscillator algorithm are described. The system operates synchronously and performs all required digital calculations within one sampling period, thereby performing as a real-time special-purpose computer. The signal-to-noise ratio (SNR) is computed for frequency offsets and sinusoidal modulation, and experimental results verify the theoretical calculations.
ERIC Educational Resources Information Center
Elfeky, Abdellah
2017-01-01
The study aims to examine the impact of social networks of a "Computer in Teaching" course on the achievement and attitudes of students at the faculty of education at Najran University. The sample consists of (60) students from the third level in the special education program; (30) students represented the control group whereas the other…
1995-09-15
The Large Isothermal Furnace (LIF) was flown on a mission in cooperation with the National Space Development Agency (NASDA) of Japan. LIF is a vacuum-heating furnace designed to heat large samples uniformly. The furnace consists of a sample container and heating element surrounded by a vacuum chamber. A crew member will insert a sample cartridge into the furnace. The furnace will be activated and operations will be controlled automatically by a computer in response to an experiment number entered on the control panel. At the end of operations, helium will be discharged into the furnace, allowing cooling to start. Cooling will occur through the use of a water jacket, while rapid cooling of samples can be accomplished through a controlled flow of helium. Data from experiments will help scientists better understand this important process, which is vital to the production of high-quality semiconductor crystals.
Direct Synthesis of Microwave Waveforms for Quantum Computing
NASA Astrophysics Data System (ADS)
Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew
Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide-bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.
NASA Technical Reports Server (NTRS)
Knauber, R. N.
1982-01-01
A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro output and four control surface deflections. Simple first order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first order lag. Mixing of pitch, yaw and roll commands to four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein including the optional plotting requires eleven (11) seconds of central processor time.
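The pseudo-inverse step mentioned above can be illustrated with a small sketch (the 4x3 mixing matrix below is purely illustrative; the real entries depend on the vehicle's control-surface geometry): the three pitch, yaw and roll components are recovered from the four measured surface deflections by a least-squares inverse of the assumed mixing.

```python
import numpy as np

# Assumed mixing: each surface deflection is a combination of pitch, yaw, roll commands.
M = np.array([[1.0, 0.0,  1.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0,  1.0],
              [0.0, 1.0, -1.0]])

deltas = np.array([2.1, 1.9, 0.6, 0.4])        # four measured surface deflections
pitch_yaw_roll = np.linalg.pinv(M) @ deltas     # least-squares recovery of the 3 commands
print(pitch_yaw_roll)
```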
Clinical Computing in General Dentistry
Schleyer, Titus K.L.; Thyvalikakath, Thankam P.; Spallek, Heiko; Torres-Urquidy, Miguel H.; Hernandez, Pedro; Yuhaniak, Jeannie
2006-01-01
Objective: Measure the adoption and utilization of, opinions about, and attitudes toward clinical computing among general dentists in the United States. Design: Telephone survey of a random sample of 256 general dentists in active practice in the United States. Measurements: A 39-item telephone interview measuring practice characteristics and information technology infrastructure; clinical information storage; data entry and access; attitudes toward and opinions about clinical computing (features of practice management systems, barriers, advantages, disadvantages, and potential improvements); clinical Internet use; and attitudes toward the National Health Information Infrastructure. Results: The authors successfully screened 1,039 of 1,159 randomly sampled U.S. general dentists in active practice (89.6% response rate). Two hundred fifty-six (24.6%) respondents had computers at chairside and thus were eligible for this study. The authors successfully interviewed 102 respondents (39.8%). Clinical information associated with administration and billing, such as appointments and treatment plans, was stored predominantly on the computer; other information, such as the medical history and progress notes, primarily resided on paper. Nineteen respondents, or 1.8% of all general dentists, were completely paperless. Auxiliary personnel, such as dental assistants and hygienists, entered most data. Respondents adopted clinical computing to improve office efficiency and operations, support diagnosis and treatment, and enhance patient communication and perception. Barriers included insufficient operational reliability, program limitations, a steep learning curve, cost, and infection control issues. Conclusion: Clinical computing is being increasingly adopted in general dentistry. However, future research must address usefulness and ease of use, workflow support, infection control, integration, and implementation issues. PMID:16501177
NASA Astrophysics Data System (ADS)
Schlicker, Lukas; Doran, Andrew; Schneppmüller, Peter; Gili, Albert; Czasny, Mathias; Penner, Simon; Gurlo, Aleksander
2018-03-01
This work describes a device for time-resolved synchrotron-based in situ and operando X-ray powder diffraction measurements at elevated temperatures under controllable gaseous environments. The respective gaseous sample environment is realized via a gas-tight capillary-in-capillary design, where the gas flow is achieved through an open-end 0.5 mm capillary located inside a 0.7 mm capillary filled with a sample powder. Thermal mass flow controllers provide appropriate gas flows and computer-controlled on-the-fly gas mixing capabilities. The capillary system is centered inside an infrared heated, proportional integral differential-controlled capillary furnace allowing access to temperatures up to 1000 °C.
Microcomputer data acquisition and control.
East, T D
1986-01-01
In medicine and biology there are many tasks that involve routine well defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming and sadly most of the computer sales persons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of Microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital Data Acquisition: Parallel Data Transmission; Serial Data Transmission; Hardware and software available. Analog Data Acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog to digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer Control: Analog output; Digital output; Closed-Loop Control. Microcomputer data acquisition and control in the 21st Century--What is in the future? High speed digital medical equipment networks; Medical decision making and artificial intelligence.
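As a small illustration of the kind of bookkeeping the chapter outlines for analog data acquisition (the function and numbers are illustrative, not taken from the chapter): given the highest frequency expected in the input signal, the sampling theorem fixes the minimum sample rate, and the desired voltage resolution fixes the number of ADC bits.

```python
import math

def daq_check(f_max_hz, sample_rate_hz, full_scale_v, resolution_v):
    """Check a proposed analog-acquisition setup (illustrative sketch)."""
    nyquist_ok = sample_rate_hz > 2 * f_max_hz            # sampling theorem
    bits = math.ceil(math.log2(full_scale_v / resolution_v))
    return nyquist_ok, bits

ok, bits = daq_check(f_max_hz=100.0, sample_rate_hz=500.0,
                     full_scale_v=10.0, resolution_v=0.005)
print(f"Nyquist satisfied: {ok}, ADC bits required: {bits}")
```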
Watanabe, S; Tanaka, M; Wada, Y; Suzuki, H; Takagi, S; Mori, S; Fukai, K; Kanazawa, Y; Takagi, M; Hirakawa, K; Ogasawara, K; Tsumura, K; Ogawa, K; Matsumoto, K; Nagaoka, S; Suzuki, T; Shimura, D; Yamashita, M; Nishio, S
1994-07-01
The telescience testbed experiments were carried out to test and investigate tele-manipulation techniques for intracellular potential recording in amphibian eggs. The telescience testbed was implemented in two separate laboratories of the Tsukuba Space Center of NASDA, which were connected by telecommunication links. Manipulators for a microelectrode and for the sample stage of a microscope were moved by computers whose command signals were transmitted from a computer in a remote control room. The computer in the control room was operated by an investigator (PI) who remotely controlled the movement of each manipulator. A stereoscopic view of the microscope image was provided using a head-mounted display (HMD) and was indispensable for the intracellular single-cell recording. The fertilization potential of amphibian eggs was successfully obtained through the remote operating system.
The comparison of various approach to evaluation erosion risks and design control erosion measures
NASA Astrophysics Data System (ADS)
Kapicka, Jiri
2015-04-01
At present there is one methodology in the Czech Republic for computing and comparing erosion risks. This methodology also contains a method to design erosion control measures. It is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of erosion (G), and it is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many troubles and damages come from local episodic erosion events. The extent of these events and their impact depend on local precipitation, the current plant phase and soil conditions. Such erosion events can cause damage to agricultural land, municipal property and water structures even in locations that are in good condition from the point of view of the long-term average annual rate of erosion. Another way to compute and compare erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural field without any barriers that could strongly influence water flow and sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analysis of soil samples taken in the study area. Results of the USLE and MUSLE methodologies and results from the mathematical model Erosion 3D were compared. Differences in the spatial distribution of the places with the highest soil erosion were compared and discussed. A further part presents the differences in the designed erosion control measures when the design is based on the different methodologies. The results show the variance in computed erosion risks obtained with the different methodologies. These variances can open a discussion about how to compute and evaluate erosion risks in areas of different importance.
Higo, Junichi; Dasgupta, Bhaskar; Mashimo, Tadaaki; Kasahara, Kota; Fukunishi, Yoshifumi; Nakamura, Haruki
2015-07-30
A novel enhanced conformational sampling method, virtual-system-coupled adaptive umbrella sampling (V-AUS), was proposed to compute the 300-K free-energy landscape for flexible molecular docking, where a virtual degree of freedom was introduced to control the sampling. This degree of freedom interacts with the biomolecular system. V-AUS was applied to complex formation of two disordered amyloid-β (Aβ30-35) peptides in a periodic box filled by an explicit solvent. An interpeptide distance was defined as the reaction coordinate, along which sampling was enhanced. A uniform conformational distribution was obtained covering a wide range of interpeptide distances from the bound to the unbound states. The 300-K free-energy landscape was characterized by thermodynamically stable basins of antiparallel and parallel β-sheet complexes and some other complex forms. Helices were frequently observed when the two peptides contacted loosely or fluctuated freely without interpeptide contacts. We observed that V-AUS converged to the uniform distribution more effectively than conventional AUS sampling did.
System identification from closed-loop data with known output feedback dynamics
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Horta, Lucas G.; Longman, Richard W.
1992-01-01
This paper presents a procedure to identify an open-loop system while it is operating under closed-loop conditions. First, closed-loop excitation data are used to compute the system's open-loop and closed-loop Markov parameters. The Markov parameters, which are the pulse response samples, are then used to compute a state space representation of the open-loop system. Two closed-loop configurations are considered in this paper. The closed-loop system can have either a linear output feedback controller or a dynamic output feedback controller. Numerical examples are provided to illustrate the proposed closed-loop identification method.
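The second step, turning pulse-response (Markov) parameters into a state-space model, is commonly done with an eigensystem-realization-type construction; the sketch below illustrates that idea for a single-input single-output case with an assumed first-order system, and is not the paper's specific algorithm.

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Realise (A, B, C) from SISO Markov parameters h1, h2, ... via a Hankel SVD."""
    H0 = np.array([[markov[i + j]     for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Sr, Vr = U[:, :order], np.diag(np.sqrt(s[:order])), Vt[:order, :]
    A = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)
    B = (Sr @ Vr)[:, :1]          # first column of the controllability factor
    C = (Ur @ Sr)[:1, :]          # first row of the observability factor
    return A, B, C

# Markov parameters of an assumed system with A=0.9, B=C=1: h[i] is the (i+1)-th parameter.
h = [0.9**k for k in range(25)]
A, B, C = era(h, order=1)
print(A, C @ B)                   # recovers A close to 0.9 and C*B close to 1
```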
Linder, Nina; Turkki, Riku; Walliander, Margarita; Mårtensson, Andreas; Diwan, Vinod; Rahtu, Esa; Pietikäinen, Matti; Lundin, Mikael; Lundin, Johan
2014-01-01
Microscopy is the gold standard for diagnosis of malaria, however, manual evaluation of blood films is highly dependent on skilled personnel in a time-consuming, error-prone and repetitive process. In this study we propose a method using computer vision detection and visualization of only the diagnostically most relevant sample regions in digitized blood smears. Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27) and uninfected controls (n = 20) were digitally scanned with an oil immersion objective (0.1 µm/pixel) to capture approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast and Scale-invariant feature transform descriptors) used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. The diagnostic accuracy was tested on 31 samples (19 infected and 12 controls). From each digitized area of a blood smear, a panel with the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and to judge whether the patient was infected with P. falciparum. The method achieved a diagnostic sensitivity and specificity of 95% and 100% as well as 90% and 100% for the two readers respectively using the diagnostic tool. Parasitemia was separately calculated by the automated system and the correlation coefficient between manual and automated parasitemia counts was 0.97. We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for visual examination and has a potential to increase the throughput in malaria diagnostics.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
Byeon, Sang-Hoon; Willis, Robert; Peters, Thomas M
2015-02-13
Outdoor and indoor (subway) samples were collected by passive sampling in urban Seoul (Korea) and analyzed with computer-controlled scanning electron microscopy coupled with energy dispersive x-ray spectroscopy (CCSEM-EDX). Soil/road dust particles accounted for 42%-60% (by weight) of fine particulate matter larger than 1 µm (PM(2.5-1.0)) in outdoor samples and 18% of PM(2.5-1.0) in subway samples. Iron-containing particles accounted for only 3%-6% in outdoor samples but 69% in subway samples. Qualitatively similar results were found for coarse particulate matter (PM(10-2.5)), with soil/road dust particles dominating outdoor samples (66%-83%) and iron-containing particles contributing most to subway PM(10-2.5) (44%). As expected, soil/road dust particles comprised a greater mass fraction of PM(10-2.5) than PM(2.5-1.0). Also as expected, the mass fraction of iron-containing particles was substantially less in PM(10-2.5) than in PM(2.5-1.0). Results of this study are consistent with known emission sources in the area and with previous studies, which showed high concentrations of iron-containing particles in the subway compared to outdoor sites. Thus, passive sampling with CCSEM-EDX offers an inexpensive means to assess PM(2.5-1.0) and PM(10-2.5) simultaneously and by composition at multiple locations.
Nearly Interactive Parabolized Navier-Stokes Solver for High Speed Forebody and Inlet Flows
NASA Technical Reports Server (NTRS)
Benson, Thomas J.; Liou, May-Fun; Jones, William H.; Trefny, Charles J.
2009-01-01
A system of computer programs is being developed for the preliminary design of high speed inlets and forebodies. The system comprises four functions: geometry definition, flow grid generation, flow solver, and graphics post-processor. The system runs on a dedicated personal computer using the Windows operating system and is controlled by graphical user interfaces written in MATLAB (The Mathworks, Inc.). The flow solver uses the Parabolized Navier-Stokes equations to compute millions of mesh points in several minutes. Sample two-dimensional and three-dimensional calculations are demonstrated in the paper.
Experiments with a small behaviour controlled planetary rover
NASA Technical Reports Server (NTRS)
Miller, David P.; Desai, Rajiv S.; Gat, Erann; Ivlev, Robert; Loch, John
1993-01-01
A series of experiments that were performed on the Rocky 3 robot is described. Rocky 3 is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky 3 runs on an eight bit processor and uses about ten k of memory.
Ruesch, Rodney; Jenkins, Philip N.; Ma, Nan
2004-03-09
Apparatus and methods are disclosed for impedance control, providing for controlling the impedance of a communication circuit using an all-digital impedance control circuit wherein one or more control bits are used to tune the output impedance. In one example embodiment, the impedance control circuit is fabricated using circuit components found in a standard macro library of a computer-aided design system. According to another example embodiment, there is provided a control for an output driver on an integrated circuit ("IC") device to provide for forming a resistor divider network with the output driver and a resistor off the IC device so that the divider network produces an output voltage, comparing the output voltage of the divider network with a reference voltage, and adjusting the output impedance of the output driver to attempt to match the output voltage of the divider network and the reference voltage. Also disclosed are over-sampling the divider network voltage, storing the results of the over-sampling, repeating the over-sampling and storing, averaging the results of multiple over-sampling operations, controlling the impedance with a plurality of bits forming a word, and updating the value of the word by only one least significant bit at a time.
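The update rule sketched in that last sentence can be expressed compactly as below (an illustrative sketch only, with an assumed word width and comparator convention, not the patented circuit): the comparator is over-sampled, the samples are averaged, and the control word moves by at most one least significant bit per update.

```python
def update_impedance_word(word, comparator_samples, word_bits=5):
    """Move the impedance control word by at most one LSB, based on averaged samples.

    comparator_samples: list of 0/1 readings, 1 meaning the divider voltage is
    above the reference (an assumed convention for this sketch).
    """
    avg = sum(comparator_samples) / len(comparator_samples)   # over-sampled average
    if avg > 0.5 and word < (1 << word_bits) - 1:
        word += 1          # nudge the word up by one least significant bit
    elif avg < 0.5 and word > 0:
        word -= 1          # nudge the word down by one least significant bit
    return word            # avg == 0.5: leave the word unchanged

word = 16
word = update_impedance_word(word, [1, 1, 0, 1, 1, 1, 0, 1])
print(word)   # 17
```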
Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
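A minimal sketch of that FFT-based focus metric is shown below (the images, cutoff fraction and stage positions are placeholders, not the system's actual parameters): each candidate position is scored by the fraction of spectral energy beyond a radial cutoff, and the highest-scoring position is taken as focus.

```python
import numpy as np

def focus_score(image, cutoff_frac=0.25):
    """Fraction of FFT energy beyond a radial cutoff; higher means more fine detail."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2)
    high = power[r > cutoff_frac * min(ny, nx)].sum()
    return high / power.sum()

# Illustrative use: score a stack of candidate stage positions and pick the best.
rng = np.random.default_rng(2)
stack = {z: rng.random((64, 64)) for z in (-2, -1, 0, 1, 2)}   # placeholder images
best_z = max(stack, key=lambda z: focus_score(stack[z]))
print(best_z)
```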
Use of television, videogames, and computer among children and adolescents in Italy.
Patriarca, Alessandro; Di Giuseppe, Gabriella; Albano, Luciana; Marinelli, Paolo; Angelillo, Italo F
2009-05-13
This survey determined the practices about television (video inclusive), videogames, and computer use in children and adolescents in Italy. A self-administered anonymous questionnaire covered socio-demographics; behaviour about television, videogames, computer, and sports; parental control over television, videogames, and computer. Overall, 54.1% and 61% always ate lunch or dinner in front of the television, 89.5% had a television in the bedroom while 52.5% of them always watched television there, and 49% indicated that parents controlled the content of what was watched on television. The overall mean length of time daily spent on television viewing (2.8 hours) and the frequency of watching for at least two hours per day (74.9%) were significantly associated with older age, always ate lunch or dinner while watching television, spent more time playing videogames and using computer. Those with parents from a lower socio-economic level were also more likely to spend more minutes viewing television. Two-thirds played videogames for 1.6 daily hours and more time was spent by those younger, males, with parents that do not control them, who watched more television, and who spent more time at the computer. The computer was used by 85% of the sample for 1.6 daily hours and those older, with a computer in the bedroom, with a higher number of computers in home, who view more television and play videogames were more likely to use the computer. Immediate and comprehensive actions are needed in order to diminish time spent at the television, videogames, and computer.
Automated serum chloride analysis using the Apple computer
Taylor, Paul J.; Bouska, Rosalie A.
1988-01-01
Chloride analysis employing a coulometric technique is a well-established method. However, the equipment needed is specialized and somewhat expensive. The purpose of this paper is to report the development of the hardware and software to perform this analysis using an Apple computer to control the coulometric titration, as well as to automate it and to print out the results. The Apple computer is used to control the flow of current in a circuit, which includes silver and platinum electrodes where the following reactions take place: Ag → Ag⁺ + e⁻ (at the silver anode) and 2H₂O + 2e⁻ → 2OH⁻ + H₂ (at the platinum cathode). The generated silver ions then react with the chloride ion in the sample to form AgCl: Ag⁺ + Cl⁻ → AgCl(s). When all of the chloride ion has been titrated, the concentration of silver ions in solution increases rapidly, which causes an increase in the current between two silver microelectrodes. This current is converted to a voltage and amplified by a simple circuit. This voltage is read by the analogue-to-digital converter. The computer stops the titration and calculates the chloride ion content of the sample. Thus, the computer controls the apparatus, records the data, reacts to the data to terminate the analyses, and prints out the results and messages to the analyst. Analysis of standards and reference sera indicates the method is rapid, accurate and precise. Application of this apparatus as a teaching aid for electronics to chemistry and medical students is also described. PMID:18925182
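The end-point arithmetic behind such a constant-current coulometric titration follows from Faraday's law; a minimal sketch is below (the current, time and sample volume are illustrative values, not the paper's): the charge passed gives the moles of silver generated, which equal the moles of chloride titrated.

```python
FARADAY = 96485.0           # coulombs per mole of electrons

def chloride_mmol_per_l(current_a, titration_time_s, sample_volume_ml):
    """Chloride concentration from a constant-current coulometric titration."""
    charge_c = current_a * titration_time_s        # Q = I * t
    mol_cl = charge_c / FARADAY                    # one electron per Ag+ per Cl-
    return mol_cl / (sample_volume_ml / 1000.0) * 1000.0   # mmol/L

print(round(chloride_mmol_per_l(current_a=0.002, titration_time_s=48.2,
                                sample_volume_ml=0.010), 1))   # roughly 100 mmol/L
```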
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
IDCOM Description; 8.2 Basic Control Computation; 8.3 Gradient Algorithm; 8.4 Simulation Model; 8.5 Model Modifications; 8.6 Summary ... constraints, and 3) control trajectory computation. 2.1.1 Internal Model of the System: The multivariable system to be controlled is represented by a ... more flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse
Portnoy, David B.; Scott-Sheldon, Lori A. J.; Johnson, Blair T.; Carey, Michael P.
2008-01-01
Objective Use of computers to promote healthy behavior is increasing. To evaluate the efficacy of these computer-delivered interventions, we conducted a meta-analysis of the published literature. Method Studies examining health domains related to the leading health indicators outlined in Healthy People 2010 were selected. Data from 75 randomized controlled trials, published between 1988 and 2007, with 35,685 participants and 82 separate interventions were included. All studies were coded independently by two raters for study and participant characteristics, design and methodology, and intervention content. We calculated weighted mean effect sizes for theoretically-meaningful psychosocial and behavioral outcomes; moderator analyses determined the relation between study characteristics and the magnitude of effect sizes for heterogeneous outcomes. Results Compared with controls, participants who received a computer-delivered intervention improved several hypothesized antecedents of health behavior (knowledge, attitudes, intentions); intervention recipients also improved health behaviors (nutrition, tobacco use, substance use, safer sexual behavior, binge/purge behaviors) and general health maintenance. Several sample, study and intervention characteristics moderated the psychosocial and behavioral outcomes. Conclusion Computer-delivered interventions can lead to improved behavioral health outcomes at first post-intervention assessment. Interventions evaluating outcomes at extended assessment periods are needed to evaluate the longer-term efficacy of computer-delivered interventions. PMID:18403003
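As a small illustration of the weighted mean effect size calculation used in such meta-analyses (a generic fixed-effect, inverse-variance sketch with made-up study values, not the authors' data): each study's effect size is weighted by the reciprocal of its variance.

```python
# Each tuple: (effect size d, variance of d) for one illustrative study.
studies = [(0.30, 0.02), (0.12, 0.01), (0.45, 0.05), (0.20, 0.015)]

weights = [1.0 / var for _, var in studies]           # inverse-variance weights
d_bar = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5                      # standard error of the mean effect
print(f"weighted mean d = {d_bar:.3f} "
      f"(95% CI {d_bar - 1.96*se:.3f} to {d_bar + 1.96*se:.3f})")
```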
Aranda-Escolástico, Ernesto; Guinaldo, María; Gordillo, Francisco; Dormido, Sebastián
2016-11-01
In this paper, periodic event-triggered controllers are proposed for the rotary inverted pendulum. The control strategy is divided in two steps: swing-up and stabilization. In both cases, the system is sampled periodically but the control actions are only computed at certain instances of time (based on events), which are a subset of the sampling times. For the stabilization control, the asymptotic stability is guaranteed applying the Lyapunov-Razumikhin theorem for systems with delays. This result is applicable to general linear systems and not only to the inverted pendulum. For the swing-up control, a trigger function is provided from the derivative of the Lyapunov function for the swing-up control law. Experimental results show a significant improvement with respect to periodic control in the number of control actions. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)
NASA Technical Reports Server (NTRS)
1989-01-01
Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Generic Electronics Module (GEM) components include the control housing, circulating fans, hard disk, tape drives, computer boards, and heat exchanger. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The GEM provides all carousel and sample cell control (SCC). The first flight of P
NASA Astrophysics Data System (ADS)
Kim, Kyoohyun; Park, Yongkeun
2017-05-01
Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics.
An application of high authority/low authority control and positivity
NASA Technical Reports Server (NTRS)
Seltzer, S. M.; Irwin, D.; Tollison, D.; Waites, H. B.
1988-01-01
Control Dynamics Company (CDy), in conjunction with NASA Marshall Space Flight Center (MSFC), has supported the U.S. Air Force Wright Aeronautical Laboratory (AFWAL) in conducting an investigation of the implementation of several DOD controls techniques. These techniques are to provide vibration suppression and precise attitude control for flexible space structures. AFWAL issued a contract to Control Dynamics to perform this work under the Active Control Technique Evaluation for Spacecraft (ACES) Program. The High Authority Control/Low Authority Control (HAC/LAC) and Positivity control techniques, which were cultivated under the DARPA Active Control of Space Structures (ACOSS) Program, were applied to a structural model of the NASA/MSFC Ground Test Facility ACES configuration. The control system designs were accomplished, and linear post-analyses of the closed-loop systems are provided. The control system designs take into account effects of sampling and delay in the control computer. Nonlinear simulation runs were used to verify the control system designs and implementations in the facility control computers. Finally, test results are given to verify operation of the control systems in the test facility.
ERIC Educational Resources Information Center
Oliveira, Marileide; Goyos, Celso; Pear, Joseph
2012-01-01
Matching-to-sample (MTS) training consists of presenting a stimulus as a sample followed by stimuli called comparisons from which a subject makes a choice. This study presents results of a pilot investigation comparing two packages for teaching university students to conduct MTS training. Two groups--control and experimental--with 2 participants…
Specimen coordinate automated measuring machine/fiducial automated measuring machine
Hedglen, Robert E.; Jacket, Howard S.; Schwartz, Allan I.
1991-01-01
The Specimen Coordinate Automated Measuring Machine (SCAMM) and the Fiducial Automated Measuring Machine (FAMM) are computer-controlled metrology systems capable of measuring length, width, and thickness, and of locating fiducial marks. SCAMM and FAMM have many similarities in their designs, and they can be converted from one to the other without taking them out of the hot cell. Both have means for: supporting a plurality of samples and a standard; controlling the movement of the samples in the +/- X and Y directions; determining the coordinates of the sample; compensating for temperature effects; and verifying the accuracy of the measurements and repeating as necessary. SCAMM and FAMM are designed to be used in hot cells.
Design on the x-ray oral digital image display card
NASA Astrophysics Data System (ADS)
Wang, Liping; Gu, Guohua; Chen, Qian
2009-10-01
According to the main characteristics of X-ray imaging, an X-ray display card was successfully designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding procedures have been designed. Filtering and sample-and-hold circuits have been designed. Data exchange with the PC104 bus has been implemented. Using a complex programmable logic device to provide gating and timing logic, the functions of counting, reading CPU control instructions, controlling the corresponding exposure and controlling the sample-and-hold have been implemented. Based on analysis of the image quality and noise, the circuit components have been adjusted, and high-quality images have been obtained.
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1980-01-01
A method for generating two dimensional finite difference grids about airfoils and other shapes by the use of the Poisson differential equation is developed. The inhomogeneous terms are automatically chosen such that two important effects are imposed on the grid at both the inner and outer boundaries. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. A FORTRAN computer program has been written to use this method. A description of the program, a discussion of the control parameters, and a set of sample cases are included.
2010-08-01
This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the ... random variables.
MICROPROCESSOR-BASED DATA-ACQUISITION SYSTEM FOR A BOREHOLE RADAR.
Bradley, Jerry A.; Wright, David L.
1987-01-01
An efficient microprocessor-based system is described that permits real-time acquisition, stacking, and digital recording of data generated by a borehole radar system. Although the system digitizes, stacks, and records independently of a computer, it is interfaced to a desktop computer for program control over system parameters such as sampling interval, number of samples, and number of times the data are stacked prior to recording on nine-track tape, and for graphics display of the digitized data. The data can be transferred to the desktop computer during recording, or it can be played back from a tape at a later time. Using the desktop computer, the operator observes results while recording data and generates hard-copy graphics in the field. Thus, the radar operator can immediately evaluate the quality of data being obtained, modify system parameters, study the radar logs before leaving the field, and rerun borehole logs if necessary. The system has proven to be reliable in the field and has increased productivity both in the field and in the laboratory.
Steinberg, David M.; Fine, Jason; Chappell, Rick
2009-01-01
Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
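The dependence of PPV and NPV on prevalence that motivates this design problem is just Bayes' rule; a minimal sketch with illustrative numbers (not from the paper) is given below.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Even a quite specific test has a modest PPV when the disease is rare.
print(ppv_npv(sensitivity=0.95, specificity=0.98, prevalence=0.01))
```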
Data-Based Predictive Control with Multirate Prediction Step
NASA Technical Reports Server (NTRS)
Barlow, Jonathan S.
2010-01-01
Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of prediction horizon length and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
NASA Astrophysics Data System (ADS)
Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard
2017-10-01
In the electrical and medical industries the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a multitude of small and narrow cold-rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for high-dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory which can describe the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. Rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison to the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing time of those two theories reveals that Alexander's theory exceeds the cycle time allowed by the 1 kHz sample rate of the real-time computer.
Development of Automatic Control of Bayer Plant Digestion
NASA Astrophysics Data System (ADS)
Riffaud, J. P.
Supervisory computer control has been achieved in Alcan's Bayer plants at Arvida, Quebec, Canada. The purpose of the automatic control system is to stabilize, and consequently increase, the alumina/caustic ratio within the digester train and in the blow-off liquor. Measurements of the electrical conductivity of the liquor are obtained from electrodeless conductivity meters. These signals, along with several others, are scanned by the computer and converted to engineering units, using specific relationships which are updated periodically for calibration purposes. At regular time intervals, values of the ratio are compared to target values and adjustments are made to the bauxite flow entering the digesters. Dead-time compensation included in the control algorithm enables a faster rate of correction. Modification of the production rate is achieved through careful timing of various flow changes. Calibration of the conductivity meters is achieved by sampling at intervals the liquor flowing through them and analysing it with a thermometric titrator. Calibration of the thermometric titrator is done at intervals with a standard solution. Calculations for both calibrations are performed by computer from data entered by the analyst. The computer was used for on-line data collection, modelling of the digester system, calculation of disturbances and simulation of control strategies before implementing the most successful strategy in the plant. Control of the ratio has been improved by the integrated system, resulting in increased plant productivity.
Printable, scannable biometric templates for secure documents and materials
NASA Astrophysics Data System (ADS)
Cambier, James L.; Musgrave, Clyde
2000-04-01
Biometric technology has been widely acknowledged as an effective means for enhancing private and public security through applications in physical access control, computer and computer network access control, medical records protection, banking security, public identification programs, and others. Nearly all of these applications involve use of a biometric token to control access to a physical entity or private information. There are also unique benefits to be derived from attaching a biometric template to a physical entity such as a document, package, laboratory sample, etc. Such an association allows fast, reliable, and highly accurate association of an individual person's identity to the physical entity, and can be used to enhance security, convenience, and privacy in many types of transactions. Examples include authentication of documents, tracking of laboratory samples in a testing environment, monitoring the movement of physical evidence within the criminal justice system, and authenticating the identity of both sending and receiving parties in shipment of high value parcels. A system is described which combines a biometric technology based on iris recognition with a printing and scanning technology for high-density bar codes.
The focal plane reception pattern calculation for a paraboloidal antenna with a nearby fence
NASA Technical Reports Server (NTRS)
Schmidt, Richard F.; Cheng, Hwai-Soon; Kao, Michael W.
1987-01-01
A computer simulation program is described which is used to estimate the effects of a proximate diffraction fence on the performance of paraboloidal antennas. The computer program is written in FORTRAN. The physical problem, mathematical formulation, and coordinate references are described. The main control structure of the program and the functions of the individual subroutines are discussed. The Job Control Language setup and program instructions are provided in the user's guide to help users execute the program. A sample problem with the corresponding output listing is included to illustrate the use of the program.
A FORTRAN program for determining aircraft stability and control derivatives from flight data
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1975-01-01
A digital computer program written in FORTRAN IV for the estimation of aircraft stability and control derivatives is presented. The program uses a maximum likelihood estimation method, and two associated programs for routine, related data handling are also included. The three programs form a package that can be used by relatively inexperienced personnel to process large amounts of data with a minimum of manpower. This package was used to successfully analyze 1500 maneuvers on 20 aircraft, and is designed to be used without modification on as many types of computers as feasible. Program listings and sample check cases are included.
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor, and gimbal torquers are given. Decoupling, feedforward, and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays, and quantization of control signals are specified.
Brooker, Simon; Kabatereine, Narcis B.; Myatt, Mark; Stothard, J. Russell; Fenwick, Alan
2007-01-01
Rapid and accurate identification of communities at highest risk of morbidity from schistosomiasis is key for sustainable control. Although school questionnaires can effectively and inexpensively identify communities with a high prevalence of Schistosoma haematobium, parasitological screening remains the preferred option for S. mansoni. To help reduce screening costs, we investigated the validity of Lot Quality Assurance Sampling (LQAS) in classifying schools according to categories of S. mansoni prevalence in Uganda, and explored its applicability and cost-effectiveness. First, we evaluated several sampling plans using computer simulation and then field tested one sampling plan in 34 schools in Uganda. Finally, cost-effectiveness of different screening and control strategies (including mass treatment without prior screening) was determined, and sensitivity analysis undertaken to assess the effect of infection levels and treatment costs. In identifying schools with prevalence ≥50%, computer simulations showed that LQAS had high levels of sensitivity and specificity (>90%) at sample sizes <20. The method also provides an ability to classify communities into three prevalence categories. Field testing showed that LQAS in which 15 children were sampled had excellent diagnostic performance (sensitivity: 100%, specificity: 96.4%, positive predictive value: 85.7% and negative predictive value: 92.3%). Screening using LQAS was more cost-effective than mass treating all schools (US$ 218 vs. US$ 482 / high prevalence school treated). Threshold analysis indicated that parasitological screening and mass treatment would become equivalent for settings where prevalence exceeds 50% in 75% of schools and for treatment costs of US$ 0.19 per schoolchild. We conclude that, in Uganda, LQAS provides a rapid, valid, and cost-effective method for guiding decision makers in allocating finite resources for the control of schistosomiasis. PMID:15960703
Brooker, Simon; Kabatereine, Narcis B; Myatt, Mark; Russell Stothard, J; Fenwick, Alan
2005-07-01
Rapid and accurate identification of communities at highest risk of morbidity from schistosomiasis is key for sustainable control. Although school questionnaires can effectively and inexpensively identify communities with a high prevalence of Schistosoma haematobium, parasitological screening remains the preferred option for S. mansoni. To help reduce screening costs, we investigated the validity of Lot Quality Assurance Sampling (LQAS) in classifying schools according to categories of S. mansoni prevalence in Uganda, and explored its applicability and cost-effectiveness. First, we evaluated several sampling plans using computer simulation and then field tested one sampling plan in 34 schools in Uganda. Finally, cost-effectiveness of different screening and control strategies (including mass treatment without prior screening) was determined, and sensitivity analysis undertaken to assess the effect of infection levels and treatment costs. In identifying schools with prevalences ≥50%, computer simulations showed that LQAS had high levels of sensitivity and specificity (>90%) at sample sizes <20. The method also provides an ability to classify communities into three prevalence categories. Field testing showed that LQAS where 15 children were sampled had excellent diagnostic performance (sensitivity: 100%, specificity: 96.4%, positive predictive value: 85.7% and negative predictive value: 92.3%). Screening using LQAS was more cost-effective than mass treating all schools (US$218 vs. US$482/high prevalence school treated). Threshold analysis indicated that parasitological screening and mass treatment would become equivalent for settings where prevalence ≥50% in 75% of schools and for treatment costs of US$0.19 per schoolchild. We conclude that, in Uganda, LQAS provides a rapid, valid and cost-effective method for guiding decision makers in allocating finite resources for the control of schistosomiasis.
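The simulation step described above can be reproduced in outline. The sketch below (Python) estimates the operating characteristic of an LQAS plan, that is, the probability that a school is classified in the high-prevalence category as a function of its true prevalence. The sample size of 15 children matches the field-tested plan, but the decision threshold and prevalence grid are illustrative choices rather than the exact plan evaluated for Uganda.

# Sketch: operating characteristics of an LQAS plan by Monte Carlo simulation.
# A school is classified "high prevalence" when at least d of n sampled children
# test positive; d is an assumed threshold, not the published decision rule.
import numpy as np

rng = np.random.default_rng(1)
n, d = 15, 7                  # children sampled per school, decision threshold
n_schools = 100_000           # simulated schools per prevalence value

for p in np.arange(0.1, 0.91, 0.1):
    positives = rng.binomial(n, p, size=n_schools)
    prob_high = (positives >= d).mean()
    print(f"true prevalence {p:.1f}: P(classified as >=50% category) = {prob_high:.2f}")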
Automated standardization technique for an inductively-coupled plasma emission spectrometer
Garbarino, John R.; Taylor, Howard E.
1982-01-01
The manifold assembly subsystem described permits real-time computer-controlled standardization and quality control of a commercial inductively-coupled plasma atomic emission spectrometer. The manifold assembly consists of a branch-structured glass manifold, a series of microcomputer-controlled solenoid valves, and a reservoir for each standard. Automated standardization involves selective actuation of each solenoid valve, which permits a specific mixed standard solution to be pumped to the nebulizer of the spectrometer. Quality control is based on the evaluation of results obtained for a mixed standard containing 17 analytes that is measured periodically along with unknown samples. An inaccurate standard evaluation triggers restandardization of the instrument according to a predetermined protocol. Interaction of the computer-controlled manifold assembly hardware with the spectrometer system is outlined. The automated standardization system is compared with the manual procedure with respect to reliability, simplicity, flexibility, and efficiency. © 1982.
Advanced control schemes and kinematic analysis for a kinematically redundant 7 DOF manipulator
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1990-01-01
The kinematic analysis and control of a kinematically redundant manipulator are addressed. The manipulator is the slave arm of a telerobot system recently built at Goddard Space Flight Center (GSFC) to serve as a testbed for investigating research issues in telerobotics. A forward kinematic transformation is developed in its most simplified form, suitable for real-time control applications, and the manipulator Jacobian is derived using the vector cross product method. Using the developed forward kinematic transformation and the quaternion representation of orientation matrices, we perform computer simulations to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. The equivalence between Cartesian velocities and the quaternion representation is also verified using computer simulation. Three control schemes are proposed and discussed for controlling the motion of the slave arm end-effector.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
Local synchronization of chaotic neural networks with sampled-data and saturating actuators.
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2014-12-01
This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Conic Sector Analysis of Hybrid Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Thompson, P. M.
1982-01-01
A hybrid control system contains an analog plant and a hybrid (or sampled-data) compensator. In this thesis a new conic sector is determined which is constructive and can be used to: (1) determine closed loop stability, (2) analyze robustness with respect to modelling uncertainties, (3) analyze steady state response to commands, and (4) select the sample rate. The use of conic sectors allows the designer to treat hybrid control systems as though they were analog control systems. The center of the conic sector can be used as a rigorous linear time invariant approximation of the hybrid control system, and the radius places a bound on the errors of this approximation. The hybrid feedback system can be multivariable, and the sampler is assumed to be synchronous. Algorithms to compute the conic sector are presented. Several examples demonstrate how the conic sector analysis techniques are applied. Extensions to single loop multirate hybrid feedback systems are presented. Further extensions are proposed for multiloop multirate hybrid feedback system and for single rate systems with asynchronous sampling.
Byeon, Sang-Hoon; Willis, Robert; Peters, Thomas M.
2015-01-01
Outdoor and indoor (subway) samples were collected by passive sampling in urban Seoul (Korea) and analyzed with computer-controlled scanning electron microscopy coupled with energy dispersive x-ray spectroscopy (CCSEM-EDX). Soil/road dust particles accounted for 42%–60% (by weight) of fine particulate matter larger than 1 µm (PM2.5–1.0) in outdoor samples and 18% of PM2.5–1.0 in subway samples. Iron-containing particles accounted for only 3%–6% in outdoor samples but 69% in subway samples. Qualitatively similar results were found for coarse particulate matter (PM10–2.5) with soil/road dust particles dominating outdoor samples (66%–83%) and iron-containing particles contributing most to subway PM10–2.5 (44%). As expected, soil/road dust particles comprised a greater mass fraction of PM10–2.5 than PM2.5–1.0. Also as expected, the mass fraction of iron-containing particles was substantially less in PM10–2.5 than in PM2.5–1.0. Results of this study are consistent with known emission sources in the area and with previous studies, which showed high concentrations of iron-containing particles in the subway compared to outdoor sites. Thus, passive sampling with CCSEM-EDX offers an inexpensive means to assess PM2.5–1.0 and PM10-2.5 simultaneously and by composition at multiple locations. PMID:25689348
Method and automated apparatus for detecting coliform organisms
NASA Technical Reports Server (NTRS)
Dill, W. P.; Taylor, R. E.; Jeffers, E. L. (Inventor)
1980-01-01
Method and automated apparatus are disclosed for determining the time of detection of metabolically produced hydrogen by coliform bacteria cultured in an electroanalytical cell from the time the cell is inoculated with the bacteria. The detection-time data provide bacteria concentration values. The apparatus is sequenced and controlled by a digital computer to discharge a spent sample, clean and sterilize the culture cell, introduce a bacteria nutrient into the cell, control the temperature of the nutrient, inoculate the nutrient with a bacteria sample, measure the electrical potential difference produced by the cell, and measure the time of detection from inoculation.
Using computer assisted learning for clinical skills education in nursing: integrative review.
Bloomfield, Jacqueline G; While, Alison E; Roberts, Julia D
2008-08-01
This paper is a report of an integrative review of research investigating computer assisted learning for clinical skills education in nursing, the ways in which it has been studied and the general findings. Clinical skills are an essential aspect of nursing practice and there is international debate about the most effective ways in which these can be taught. Computer assisted learning has been used as an alternative to conventional teaching methods, and robust research to evaluate its effectiveness is essential. The CINAHL, Medline, BNI, PsycInfo and ERIC electronic databases were searched for the period 1997-2006 for research-based papers published in English. Electronic citation tracking and hand searching of reference lists and relevant journals was also undertaken. Twelve studies met the inclusion criteria. An integrative review was conducted and each paper was explored in relation to: design, aims, sample, outcome measures and findings. Many of the study samples were small and there were weaknesses in designs. There is limited empirical evidence addressing the use of computer assisted learning for clinical skills education in nursing. Computer assisted learning has been used to teach a limited range of clinical skills in a variety of settings. The paucity of evaluative studies indicates the need for more rigorous research to investigate the effect of computer assisted learning for this purpose. Areas that need to be addressed in future studies include: sample size, range of skills, longitudinal follow-up and control of confounding variables.
The New Generation of Information Systems.
ERIC Educational Resources Information Center
Grunwald, Peter
1990-01-01
A new generation of home-use electronic information systems could help transform American schooling. These services reach beyond computer enthusiasts, using various combinations of mass marketing techniques, attractive graphics, easy-to-use controls, localized information, low-cost access, and dedicated terminals. Representative samples include…
VanVleet, Thomas; Voss, Michelle; Dabit, Sawsan; Mitko, Alex; DeGutis, Joseph
2018-05-03
Healthy aging is associated with a decline in multiple functional domains including perception, attention, short and long-term memory, reasoning, decision-making, as well as cognitive and motor control functions; all of which are significantly modulated by an individual's level of alertness. The control of alertness also significantly declines with age and contributes to increased lapses of attention in everyday life, ranging from minor memory slips to a lack of vigilance and increased risk of falls or motor-vehicle accidents. Several experimental behavioral therapies designed to remediate age-related cognitive decline have been developed, but differ widely in content, method and dose. Preliminary studies demonstrate that Tonic and Phasic Alertness Training (TAPAT) can improve executive functions in older adults and may be a useful adjunct treatment to enhance benefits gained in other clinically validated treatments. The purpose of the current trial (referred to as the Attention training for Learning Enhancement and Resilience Trial or ALERT) is to compare TAPAT to an active control training condition, include a larger sample of patients, and assess both cognitive and functional outcomes. We will employ a multi-site, longitudinal, blinded randomized controlled trial (RCT) design with a target sample of 120 patients with age-related cognitive decline. Patients will be asked to complete 36 training sessions remotely (30 min/day, 5 days a week, over 3 months) of either the experimental TAPAT training program or an active control computer games condition. Patients will be assessed on a battery of cognitive and functional outcomes at four time points, including: a) immediately before training, b) halfway through training, c) within forty-eight hours post completion of total training, and d) after a three-month no-contact period post completion of total training, to assess the longevity of potential training effects. The strengths of this protocol are that it tests an innovative, in-home administered treatment that targets a fundamental deficit in adults with age-related cognitive decline; employs highly sensitive computer-based assessments of cognition as well as functional abilities, and incorporates a large sample size in an RCT design. ClinicalTrials.gov identifier: NCT02416401.
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on suitable discrete schemes and solution methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver for the resulting system of linear equations is very costly in both memory and computation. To address this issue, we establish a multigrid-preconditioned BI-CGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and of its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of the iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, in which the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, can reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling intervals in the discretization.
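As an illustration of the iterative-solver idea, though not of the multigrid preconditioner constructed in the paper, the sketch below assembles a small 2D Helmholtz-type finite-difference matrix and solves it with SciPy's BI-CGSTAB, once unpreconditioned and once with an incomplete-LU factorization standing in for the multigrid cycle. The grid size and the deliberately low wavenumber are assumptions chosen so that the simple stand-in preconditioner behaves well.

# Sketch: preconditioned BI-CGSTAB for a 2D Helmholtz-type finite-difference system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, k = 80, 1.0 / 80, 5.0                   # grid points per side, spacing, wavenumber (illustrative)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)) - (k**2) * sp.eye(n * n)).tocsc()
b = np.zeros(n * n)
b[(n // 2) * n + n // 2] = 1.0                # point source in the middle of the grid

iters = {"plain": 0, "ilu": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x0, info0 = spla.bicgstab(A, b, callback=counter("plain"), maxiter=5000)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)     # stand-in for the multigrid preconditioner
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
x1, info1 = spla.bicgstab(A, b, M=M, callback=counter("ilu"), maxiter=5000)

print("plain BI-CGSTAB:   ", "converged" if info0 == 0 else f"info={info0}",
      "after", iters["plain"], "iterations")
print("ILU-preconditioned:", "converged" if info1 == 0 else f"info={info1}",
      "after", iters["ilu"], "iterations")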
Augmentation of thrombin generation in neonates undergoing cardiopulmonary bypass.
Guzzetta, N A; Szlam, F; Kiser, A S; Fernandez, J D; Szlam, A D; Leong, T; Tanaka, K A
2014-02-01
Factor concentrates are currently available and becoming increasingly used off-label for treatment of bleeding. We compared recombinant activated factor VII (rFVIIa) with three-factor prothrombin complex concentrate (3F-PCC) for the ability to augment thrombin generation (TG) in neonatal plasma after cardiopulmonary bypass (CPB). First, we used a computer-simulated coagulation model to assess the impact of rFVIIa and 3F-PCC, and then performed similar measurements ex vivo using plasma from neonates undergoing CPB. Simulated TG was computed according to the coagulation factor levels from umbilical cord plasma and the therapeutic levels of rFVIIa, 3F-PCC, or both. Subsequently, 11 neonates undergoing cardiac surgery were enrolled. Two blood samples were obtained from each neonate: pre-CPB and post-CPB after platelet and cryoprecipitate transfusion. The post-CPB products sample was divided into control (no treatment), control plus rFVIIa (60 nM), and control plus 3F-PCC (0.3 IU ml(-1)) aliquots. Three parameters of TG were measured ex vivo. The computer-simulated post-CPB model demonstrated that rFVIIa failed to substantially improve lag time, TG rate and peak thrombin without supplementing prothrombin. Ex vivo data showed that addition of rFVIIa post-CPB significantly shortened lag time; however, rate and peak were not statistically significantly improved. Conversely, 3F-PCC improved all TG parameters in parallel with increased prothrombin levels in both simulated and ex vivo post-CPB samples. Our data highlight the importance of prothrombin replacement in restoring TG. Despite a low content of FVII, 3F-PCC exerts potent procoagulant activity compared with rFVIIa ex vivo. Further clinical evaluation regarding the efficacy and safety of 3F-PCC is warranted.
Problems experienced by people with arthritis when using a computer.
Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester
2009-05-15
To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.
Hughes, J Antony; Phillips, Gordon; Reed, Phil
2013-01-01
Basic literacy skills underlie much future adult functioning and are targeted in children through a variety of means. Children with reading problems either were exposed to a self-paced computer programme that focused on improving phonetic ability or underwent a classroom-based reading intervention. Exposure was limited to three 40-minute sessions a week for six weeks. The children were assessed in terms of their reading, spelling, and mathematics abilities, as well as for their externalising and internalising behaviour problems, before the programme commenced and immediately after it terminated. Relative to the control group, the computer programme improved reading by about seven months in boys (but not in girls), but had no impact on either spelling or mathematics. Children on the programme also demonstrated fewer externalising and internalising behaviour problems than the control group. The results suggest that brief exposure to a self-paced phonetic computer-teaching programme had some benefits for the sample.
A method of calculating the performance of controllable propellers with sample computations
NASA Technical Reports Server (NTRS)
Hartman, Edwin P
1934-01-01
This paper contains a series of calculations showing how the performance of controllable propellers may be derived from data on fixed-pitch propellers given in N.A.C.A. Technical Report No. 350, or from similar data. Sample calculations are given which compare the performance of airplanes with fixed-pitch and with controllable propellers. The gain in performance with controllable propellers is shown to be largely due to the increased power available, rather than to an increase in efficiency. Controllable propellers are of particular advantage when used with geared and with supercharged engines. A controllable propeller reduces the take-off run, increases the rate of climb and the ceiling, but does not increase the high speed, except when operating above the design altitude of the previously used fixed-pitch propeller or when that propeller was designed for other than high speed.
Environment and health: Probes and sensors for environment digital control
NASA Astrophysics Data System (ADS)
Schettini, Chiara
2014-05-01
The idea of studying the environment using New Technologies (NT) came from a MIUR (Ministry of Education of the Italian Government) notice that allocated funds for the realization of innovative school science projects. The "Environment and Health" project uses probes and sensors for the digital monitoring of the environment (water, air and soil). The working group was composed of four Science teachers from 'Liceo Statale G. Mazzini', under the coordination of teacher Chiara Schettini. The Didactic Section of Naples City of Sciences helped the teachers develop the project and organized a refresher course for them on the use of digitally controlled sensors. The project connects Environment and Technology because the study of natural phenomena and the analysis of chemical-physical parameters give students and teachers skills for studying the environment through NT-based data processing. During the practical part of the project, samples of air, water and soil are gathered in different contexts. Sample analysis was done in the school's science laboratory with digitally controlled sensors. The data are processed with specific software and the results have been collected in a booklet and in a computer database. During the first year, the project involved six school classes (students aged 14-15 years), under the coordination of their Science teachers. The project aims are: 1) making students more aware of environmental issues; 2) developing basic skills for evaluating air, water and soil quality; 3) developing solid skills in the use of digitally controlled sensors; and 4) developing computing skills for processing and presenting data. The project seeks to foster a broad environmental awareness and an appreciation of the need for a healthy environment to protect our health. Moreover, it highlights the importance of NT as an instrument of knowledge.
NASA Astrophysics Data System (ADS)
Tong, Kai; Fan, Shiming; Gong, Derong; Lu, Zuming; Liu, Jian
The synchronizer/data buffer (SDB) in the command and data acquisition station for China's future Geostationary Meteorological Satellite is described. Several computers and special microprocessors are used in tandem with minimal hardware to fulfill all of the functions. A high-accuracy digital phase-locked loop is implemented in the computer by controlling the count value of the 20-MHz clock to acquire and track signals such as the sun pulse, the scan synchronization detection pulse, and the earth pulse. Sun pulse and VISSR data are recorded precisely and economically by digitizing their time relationship. The VISSR scan timing, the equiangular control timing, and equal-time sampling on the satellite are also discussed.
Computer-Aided Diagnostic System For Mass Survey Chest Images
NASA Astrophysics Data System (ADS)
Yasuda, Yoshizumi; Kinoshita, Yasuhiro; Emori, Yasufumi; Yoshimura, Hitoshi
1988-06-01
In order to support the screening of chest radiographs in mass surveys, a computer-aided diagnostic system that automatically detects abnormalities in candidate images using digital image analysis techniques has been developed. Extracting the boundary lines of the lung fields and examining their shapes allowed various kinds of abnormalities to be detected. Correction and expansion were facilitated by describing the system control, image analysis control, and judgement of abnormality in a rule-based programming language. In experiments using typical samples of students' radiograms, good results were obtained for the detection of abnormal lung-field shape, cardiac hypertrophy, and scoliosis. For the detection of diaphragmatic abnormality, relatively good results were obtained, but further improvements will be necessary.
A computer system for analysis and transmission of spirometry waveforms using volume sampling.
Ostler, D V; Gardner, R M; Crapo, R O
1984-06-01
A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
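Volume sampling as described above stores only the time taken for each fixed 10-ml volume increment, from which the volume-time and flow-volume curves can be rebuilt. A tiny sketch (Python; the interval data are synthetic, not spirometer output):

# Sketch: rebuilding volume-time and flow-volume information from volume-sampled intervals.
# Each stored value is the time (s) taken for the next 10-ml volume increment.
import numpy as np

dv = 0.010                                    # litres per sample (10 ml)
# Synthetic intervals mimicking a forced exhalation: fast at first, then slowing
intervals = 0.002 * np.exp(np.linspace(0.0, 3.0, 400))

t = np.cumsum(intervals)                      # time at each 10-ml increment
volume = dv * np.arange(1, len(intervals) + 1)
flow = dv / intervals                         # instantaneous flow (litres/s)

fev1 = volume[np.searchsorted(t, 1.0) - 1]    # volume exhaled in the first second
print("FVC  = %.2f L" % volume[-1])
print("FEV1 = %.2f L" % fev1)
print("peak flow = %.1f L/s" % flow.max())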
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
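A minimal sketch of the pivoted-QR selection described above, with a synthetic data matrix standing in for flow snapshots: POD modes are computed from the data, the pivoted QR factorization of the transposed mode matrix picks the sensor rows, and a held-out state is reconstructed from those point measurements. The dimensions and synthetic structures are assumptions for illustration only.

# Sketch: sensor placement via pivoted QR on POD modes (synthetic data).
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
n_state, n_snap, r = 500, 200, 10            # state dimension, snapshots, modes kept

# Synthetic "flow" snapshots built from a few coherent spatial structures
x = np.linspace(0.0, 1.0, n_state)
structures = np.stack([np.sin((j + 1) * np.pi * x) for j in range(r)], axis=1)
X = structures @ rng.standard_normal((r, n_snap))        # n_state x n_snap snapshot matrix

# POD modes from the SVD of the snapshot matrix
U, s, _ = np.linalg.svd(X, full_matrices=False)
Psi = U[:, :r]

# Pivoted QR of Psi^T: the first r pivots give the sensor locations
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]

# Reconstruct a held-out snapshot from its r point measurements
x_true = structures @ rng.standard_normal(r)
y = x_true[sensors]                                       # sparse point measurements
a_hat = np.linalg.solve(Psi[sensors, :], y)               # modal coefficients from sensors
x_rec = Psi @ a_hat

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print("sensors at indices", np.sort(sensors), "relative reconstruction error %.2e" % err)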
Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision
Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana
2014-01-01
We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
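The isoperimetric quotient used above as a roundness measure is straightforward to compute from an extracted contour. A small sketch (Python; the polygonal contours are made-up examples, not vesicle data from the study):

# Sketch: projected equivalent diameter and isoperimetric quotient Q = 4*pi*A / P^2.
# Q equals 1 for a circle and decreases as the contour departs from roundness.
import numpy as np

def shape_descriptors(points):
    # points: (N, 2) array of contour vertices ordered around the shape
    x, y = points[:, 0], points[:, 1]
    x2, y2 = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * y2 - x2 * y))             # shoelace formula
    perimeter = np.sum(np.hypot(x2 - x, y2 - y))
    q = 4.0 * np.pi * area / perimeter**2                 # isoperimetric quotient
    diameter = 2.0 * np.sqrt(area / np.pi)                # area-equivalent projected diameter
    return diameter, q

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
circle = np.column_stack([5.0 * np.cos(t), 5.0 * np.sin(t)])      # radius 5 um
ellipse = np.column_stack([10.0 * np.cos(t), 2.5 * np.sin(t)])    # same area, elongated
for name, c in [("circle", circle), ("ellipse", ellipse)]:
    dia, q = shape_descriptors(c)
    print("%s: equivalent diameter %.2f um, isoperimetric quotient %.3f" % (name, dia, q))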
Integrated multiplexed capillary electrophoresis system
Yeung, Edward S.; Tan, Hongdong
2002-05-14
The present invention provides an integrated multiplexed capillary electrophoresis system for the analysis of sample analytes. The system integrates and automates multiple components, such as chromatographic columns and separation capillaries, and further provides a detector for the detection of analytes eluting from the separation capillaries. The system employs multiplexed freeze/thaw valves to manage fluid flow and sample movement. The system is computer controlled and is capable of processing samples through reaction, purification, denaturation, pre-concentration, injection, separation and detection in parallel fashion. Methods employing the system of the invention are also provided.
Development of a remote control console for the HHIRF 25-MV tandem accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasanul Basher, A.M.
1991-09-01
The CAMAC-based control system for the 25-MV Tandem Accelerator at HHIRF uses two Perkin-Elmer 32-bit minicomputers: a message-switching computer and a supervisory computer. Two operator consoles are located on one of the six serial highways. Operator control is provided by means of a console CRT, trackball, and assignable shaft encoders and meters. The message-switching computer transmits and receives control information on the serial highways. At present, the CRT pages with updated parameters can be displayed and parameters can be controlled only from the two existing consoles, one in the Tandem control room and the other in the ORIC control room. It has become necessary to expand the control capability to several other locations in the building. With the expansion of the control and monitoring capability for accelerator parameters to other locations, operators will be able to control and observe the results of their control actions at the same time. Since the new control console will be PC-based, the existing page format will be changed. The PC will communicate with the Perkin-Elmer through RS-232 and a communication software package. The hardware configuration has been established, and a communication software program that reads the pages from shared memory has been developed. In this paper, we present the implementation strategy, the work completed, the existing and new page formats, future action plans, an explanation of the pages and the use of related global variables, a sample session, and flowcharts.
Remote control missile model test
NASA Technical Reports Server (NTRS)
Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.
1989-01-01
An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.
Innovative flow controller for time integrated passive sampling using SUMMA canisters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, P.; Farant, J.P.; Cole, H.
1996-12-31
To restrict the entry of gaseous contaminants into evacuated vessels such as SUMMA canisters, mechanical flow controllers are used to collect time-integrated atmospheric samples. Using the passive force generated by the pressure gradient, the motion of gas can be controlled to obtain a constant flow rate. Presently, devices based on the principle of critical orifices are used, and they are all limited to an upper integrated sampling time. A novel flow controller that can be designed to achieve any desired sampling time when used on evacuated vessels was recently developed. It can extend the sampling time to hours, days, weeks or even months for the benefit of environmental, engineering and toxicological professionals. The design of the controller is obtained from computer simulations done with an original set of equations derived from fluid mechanics and gas kinetic laws. To date, the experimental results have shown excellent agreement with predictions obtained from the mathematical model. This new controller has already found numerous applications. Units able to deliver a constant sampling rate between vacuum and approximately -10 inches Hg over continuous long-term durations have been used with SUMMA canisters of different volumes (500 ml, 1 litre and 6 litres). Essentially, any combination of sampling time and sampler volume is possible. The innovative flow controller has contributed to an air quality assessment around a sanitary landfill (indoor/outdoor), and inside domestic wastewater and pulp-mill sludge treatment facilities. It is presently being used as an alternative methodology for atmospheric sampling on the Russian orbital station Mir. This device affords true long-term passive monitoring of selected gaseous air pollutants for environmental studies. 14 refs., 3 figs.
Enhanced conformational sampling of carbohydrates by Hamiltonian replica-exchange simulation.
Mishra, Sushil Kumar; Kara, Mahmut; Zacharias, Martin; Koca, Jaroslav
2014-01-01
Knowledge of the structure and conformational flexibility of carbohydrates in an aqueous solvent is important to improving our understanding of how carbohydrates function in biological systems. In this study, we extend a variant of the Hamiltonian replica-exchange molecular dynamics (MD) simulation to improve the conformational sampling of saccharides in an explicit solvent. During the simulations, a biasing potential along the glycosidic-dihedral linkage between the saccharide monomer units in an oligomer is applied at various levels along the replica runs to enable effective transitions between various conformations. One reference replica runs under the control of the original force field. The method was tested on disaccharide structures and further validated on biologically relevant blood group B, Lewis X and Lewis A trisaccharides. The biasing potential-based replica-exchange molecular dynamics (BP-REMD) method provided a significantly improved sampling of relevant conformational states compared with standard continuous MD simulations, with modest computational costs. Thus, the proposed BP-REMD approach adds a new dimension to existing carbohydrate conformational sampling approaches by enhancing conformational sampling in the presence of solvent molecules explicitly at relatively low computational cost.
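The biasing-potential replica-exchange idea can be illustrated on a toy one-dimensional 'dihedral', with Metropolis Monte Carlo standing in for molecular dynamics. Each replica carries a differently scaled flattening bias along the dihedral, neighbouring replicas exchange configurations with the usual Hamiltonian replica-exchange acceptance rule, and only the unbiased reference replica is analysed. The potential, bias and parameters below are invented; they do not represent the carbohydrate force field or the BP-REMD implementation of the paper.

# Sketch: Hamiltonian replica exchange with a scaled biasing potential (toy model).
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0 / 0.6                              # inverse temperature (reduced units)

def u_phys(phi):                              # physical potential: three wells, high barriers
    return 5.0 * (1.0 + np.cos(3.0 * phi))

def u_bias(phi):                              # bias that flattens the dihedral barriers
    return -u_phys(phi)

lambdas = [0.0, 0.3, 0.6, 0.9]                # bias scaling per replica; 0.0 = reference
def u_total(phi, lam):
    return u_phys(phi) + lam * u_bias(phi)

phis = rng.uniform(-np.pi, np.pi, size=len(lambdas))
ref = []
for step in range(20000):
    # local Metropolis move in each replica under its own (biased) Hamiltonian
    for i, lam in enumerate(lambdas):
        trial = phis[i] + rng.normal(0.0, 0.3)
        trial = (trial + np.pi) % (2.0 * np.pi) - np.pi        # wrap to [-pi, pi)
        dU = u_total(trial, lam) - u_total(phis[i], lam)
        if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
            phis[i] = trial
    # attempt an exchange between a random pair of neighbouring replicas
    i = rng.integers(0, len(lambdas) - 1)
    dU = (u_total(phis[i + 1], lambdas[i]) + u_total(phis[i], lambdas[i + 1])
          - u_total(phis[i], lambdas[i]) - u_total(phis[i + 1], lambdas[i + 1]))
    if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
        phis[i], phis[i + 1] = phis[i + 1], phis[i]
    ref.append(phis[0])                       # only the unbiased replica is analysed

samples = np.array(ref[2000:])
basin1 = np.mean((samples > -2 * np.pi / 3) & (samples < 0))
basin2 = np.mean((samples > 0) & (samples < 2 * np.pi / 3))
print("basin populations: %.2f %.2f %.2f (expect roughly 1/3 each by symmetry)"
      % (basin1, basin2, 1.0 - basin1 - basin2))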
Recent interest in monitoring and speciation of particulate matter has led to increased application of scanning electron microscopy (SEM) coupled with energy-dispersive x-ray analysis (EDX) to individual particle analysis. SEM/EDX provides information on the size, shape, co...
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
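A software analogue of that computation is sketched below in Python with synthetic, Poisson-sampled 'velocity' data: products of velocity fluctuations are slotted into lag-time bins to estimate the auto-covariance function, and a histogram of interarrival times mirrors the normalization record kept by the hardware. Block length, slot width and the turbulence surrogate signal are all assumptions for illustration.

# Sketch: auto-covariance of Poisson-sampled velocity data by lag slotting.
import numpy as np

rng = np.random.default_rng(4)
n, rate = 4096, 1000.0                        # block length, mean sample rate (1/s)
n_slots, dt_slot = 512, 1e-3                  # covariance points, slot width (s)

# Poisson arrival times and an exponentially correlated "turbulence" signal
t = np.cumsum(rng.exponential(1.0 / rate, size=n))
tau_f = 0.01                                  # integral time scale of the signal (s)
u = np.zeros(n)
for k in range(1, n):
    a = np.exp(-(t[k] - t[k - 1]) / tau_f)
    u[k] = a * u[k - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
u -= u.mean()                                 # velocity fluctuations

# Slot products of fluctuations by lag and count pairs per slot
cov = np.zeros(n_slots)
cnt = np.zeros(n_slots, dtype=int)
for i in range(n):
    j = i
    while j < n and t[j] - t[i] < n_slots * dt_slot:
        s = int((t[j] - t[i]) / dt_slot)
        cov[s] += u[i] * u[j]
        cnt[s] += 1
        j += 1
covariance = np.where(cnt > 0, cov / np.maximum(cnt, 1), 0.0)

# Histogram of interarrival times (the hardware's normalization record)
interarrival = np.diff(t)
hist, _ = np.histogram(interarrival, bins=n_slots, range=(0.0, n_slots * dt_slot))

rho = covariance / covariance[0]
print("fraction of interarrival times in the first slot: %.2f" % (hist[0] / len(interarrival)))
print("integral time scale estimate: %.4f s (true %.4f s)" % (np.sum(rho) * dt_slot, tau_f))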
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
Sampling and Visualizing Creases with Scale-Space Particles
Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik
2010-01-01
Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handle such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
Lucey, K.J.
1990-01-01
The U.S. Geological Survey conducts an external blind sample quality assurance project for its National Water Quality Laboratory in Denver, Colorado, based on the analysis of reference water samples. Reference samples containing selected inorganic and nutrient constituents are disguised as environmental samples at the Survey's office in Ocala, Florida, and are sent periodically through other Survey offices to the laboratory. The results of this blind sample project indicate the quality of analytical data produced by the laboratory. This report provides instructions on the use of QADATA, an interactive, menu-driven program that allows users to retrieve the results of the blind sample quality-assurance project. The QADATA program, which is available on the U.S. Geological Survey's national computer network, accesses a blind sample database that contains more than 50,000 determinations from the last five water years for approximately 40 constituents at various concentrations. The data can be retrieved from the database for any user-defined time period and for any or all available constituents. After the user defines the retrieval, the program prepares statistical tables, control charts, and precision plots and generates a report which can be transferred to the user's office through the computer network. A discussion of the interpretation of the program output is also included. This quality assurance information will permit users to document the quality of the analytical results received from the laboratory. The blind sample data is entered into the database within weeks after being produced by the laboratory and can be retrieved to meet the needs of specific projects or programs. (USGS)
Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis
Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana
2012-01-01
Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153
The particle size distributions, morphologies, and chemical composition distributions of 14 coal fly ash (CFA) samples produced by the combustion of four western U.S. coals (two subbituminous, one lignite, and one bituminous) and three eastern U.S. coals (all bituminous) have bee...
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large-order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls-related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large-order systems, with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with the option to first block diagonalize, 5) Root locus and frequency response for single-variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements. There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application-oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check-out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
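One of the capabilities listed above, matrix exponentiation together with integrals of the matrix exponential, is exactly what is needed to convert a continuous-time state-space model to its sampled-data (zero-order-hold) equivalent. A short sketch in Python, with SciPy standing in for the FORTRAN routines and a made-up double-integrator plant:

# Sketch: zero-order-hold discretization from the matrix exponential and its integral,
# using the augmented-matrix identity expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # double integrator (illustrative plant)
B = np.array([[0.0],
              [1.0]])
T = 0.1                                 # sample period (s)

n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * T)
Ad, Bd = E[:n, :n], E[:n, n:]           # Ad = e^{AT}, Bd = (integral_0^T e^{As} ds) B

print("Ad =\n", Ad)                     # expect [[1, T], [0, 1]]
print("Bd =\n", Bd)                     # expect [[T^2/2], [T]]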
An automated atmospheric sampling system operating on 747 airliners
NASA Technical Reports Server (NTRS)
Perkins, P. J.; Gustafsson, U. R. C.
1976-01-01
An air sampling system that automatically measures the temporal and spatial distribution of particulate and gaseous constituents of the atmosphere is collecting data on commercial air routes covering the world. Measurements are made in the upper troposphere and lower stratosphere (6 to 12 km) of constituents related to aircraft engine emissions and other pollutants. Aircraft operated by different airlines sample air at latitudes from the Arctic to Australia. This unique system includes specialized instrumentation, a special air inlet probe for sampling outside air, a computerized automatic control, and a data acquisition system. Air constituent and related flight data are tape recorded in flight for later computer processing on the ground.
Risk Factors for Addiction and Their Association with Model-Based Behavioral Control.
Reiter, Andrea M F; Deserno, Lorenz; Wilbertz, Tilmann; Heinze, Hans-Jochen; Schlagenhauf, Florian
2016-01-01
Addiction shows familial aggregation and previous endophenotype research suggests that healthy relatives of addicted individuals share altered behavioral and cognitive characteristics with individuals suffering from addiction. In this study we asked whether impairments in behavioral control proposed for addiction, namely a shift from goal-directed, model-based toward habitual, model-free control, extends toward an unaffected sample (n = 20) of adult children of alcohol-dependent fathers as compared to a sample without any personal or family history of alcohol addiction (n = 17). Using a sequential decision-making task designed to investigate model-free and model-based control combined with a computational modeling analysis, we did not find any evidence for altered behavioral control in individuals with a positive family history of alcohol addiction. Independent of family history of alcohol dependence, we however observed that the interaction of two different risk factors of addiction, namely impulsivity and cognitive capacities, predicts the balance of model-free and model-based behavioral control. Post-hoc tests showed a positive association of model-based behavior with cognitive capacity in the lower, but not in the higher impulsive group of the original sample. In an independent sample of particularly high- vs. low-impulsive individuals, we confirmed the interaction effect of cognitive capacities and high vs. low impulsivity on model-based control. In the confirmation sample, a positive association of omega with cognitive capacity was observed in highly impulsive individuals, but not in low impulsive individuals. Due to the moderate sample size of the study, further investigation of the association of risk factors for addiction with model-based behavior in larger sample sizes is warranted.
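The "omega" mentioned above presumably refers to the weighting of model-based relative to model-free values in the hybrid reinforcement-learning model fit to the two-step task. The abstract does not spell out the model, so the sketch below shows only the standard form such a weight usually takes; the parameter values and names are illustrative assumptions.

```python
# Hypothetical sketch of the weighting parameter omega in a hybrid
# reinforcement-learning model: first-stage action values are a convex
# combination of model-based and model-free estimates, passed to a softmax.
import numpy as np

def combined_values(q_mb, q_mf, omega):
    """omega = 1 -> purely model-based control; omega = 0 -> purely model-free."""
    return omega * np.asarray(q_mb) + (1.0 - omega) * np.asarray(q_mf)

def choice_probabilities(q, beta):
    """Softmax choice rule with inverse temperature beta."""
    z = beta * (q - np.max(q))
    p = np.exp(z)
    return p / p.sum()

q = combined_values(q_mb=[0.7, 0.3], q_mf=[0.4, 0.6], omega=0.8)
print(choice_probabilities(q, beta=3.0))
```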
Digital redesign of the control system for the Robotics Research Corporation model K-1607 robot
NASA Technical Reports Server (NTRS)
Carroll, Robert L.
1989-01-01
The analog control system for positioning each link of the Robotics Research Corporation Model K-1607 robot manipulator was redesigned for computer control. In order to accomplish the redesign, a linearized model of the dynamic behavior of the robot was developed. The parameters of the model were determined by examination of the input-output data collected in closed-loop operation of the analog control system. The robot manipulator possesses seven degrees of freedom in its motion. The analog control system installed by the manufacturer of the robot attempts to control the positioning of each link without feedback from other links. Constraints on the design of a digital control system include: the robot cannot be disassembled for measurement of parameters; the digital control system must not include filtering operations if possible, because of lack of computer capability; and criteria for goodness of control system performance are lacking. The resulting design employs sampled-data position and velocity feedback. The design criteria require the control system's gain margin and phase margin, measured at the same frequencies, to match those provided by the analog control system.
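The sampled-data position and velocity feedback described above can be sketched for a single joint as a discrete-time PD loop. The gains, sample period, and velocity estimator below are illustrative assumptions, not values from the report.

```python
# Minimal sketch (hypothetical gains) of sampled-data position/velocity
# feedback for one joint: torque command from position error and a
# backward-difference velocity estimate at sample period T.
def make_pd_controller(kp, kv, T):
    prev_pos = {"value": None}
    def update(pos_ref, pos_meas):
        vel_est = 0.0 if prev_pos["value"] is None else (pos_meas - prev_pos["value"]) / T
        prev_pos["value"] = pos_meas
        return kp * (pos_ref - pos_meas) - kv * vel_est
    return update

controller = make_pd_controller(kp=50.0, kv=2.0, T=0.005)  # 200 Hz loop (illustrative)
torque = controller(pos_ref=1.0, pos_meas=0.95)
```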
GPU-Based, Microsecond-Latency, Hecto-Channel MIMO Feedback Control of Magnetically Confined Plasmas
NASA Astrophysics Data System (ADS)
Rath, Nikolaus
Feedback control has become a crucial tool in the research on magnetic confinement of plasmas for achieving controlled nuclear fusion. This thesis presents a novel plasma feedback control system that, for the first time, employs a Graphics Processing Unit (GPU) for microsecond-latency, real-time control computations. This novel application area for GPU computing is opened up by a new system architecture that is optimized for low-latency computations on less-than-kilobyte-sized data samples as they occur in typical plasma control algorithms. In contrast to traditional GPU computing approaches that target complex, high-throughput computations with massive amounts of data, the architecture presented in this thesis uses the GPU as the primary processing unit rather than as an auxiliary of the CPU, and data is transferred from A-D/D-A converters directly into GPU memory using peer-to-peer PCI Express transfers. The described design has been implemented in a new, GPU-based control system for the High-Beta Tokamak - Extended Pulse (HBT-EP) device. The system is built from commodity hardware and uses an NVIDIA GeForce GPU and D-TACQ A-D/D-A converters providing a total of 96 input and 64 output channels. The system is able to run with sampling periods down to 4 μs and latencies down to 8 μs. The GPU provides a total processing power of 1.5 x 10^12 floating point operations per second. To illustrate the performance and versatility of both the general architecture and concrete implementation, a new control algorithm has been developed. The algorithm is designed for the control of multiple rotating magnetic perturbations in situations where the plasma equilibrium is not known exactly and features an adaptive system model: instead of requiring the rotation frequencies and growth rates embedded in the system model to be set a priori, the adaptive algorithm derives these parameters from the evolution of the perturbation amplitudes themselves. This results in non-linear control computations with high computational demands, but is handled easily by the GPU-based system. Both digital processing latency and an arbitrary multi-pole response of amplifiers and control coils are fully taken into account for the generation of control signals. To separate sensor signals into perturbed and equilibrium components without knowledge of the equilibrium fields, a new separation method based on biorthogonal decomposition is introduced and used to derive a filter that performs the separation in real-time. The control algorithm has been implemented and tested on the new, GPU-based feedback control system of the HBT-EP tokamak. In this instance, the algorithm was set up to control four rotating n = 1 perturbations at different poloidal angles. The perturbations were treated as coupled in frequency but independent in amplitude and phase, so that the system effectively controls a helical n = 1 perturbation with unknown poloidal spectrum. Depending on the plasma's edge safety factor and rotation frequency, the control system is shown to be able to suppress the amplitude of the dominant 8 kHz mode by up to 60% or amplify the saturated amplitude by a factor of up to two. Intermediate feedback phases combine suppression and amplification with a speed-up or slow-down of the mode rotation frequency. Increasing feedback gain results in the excitation of an additional, slowly rotating 1.4 kHz mode without further effects on the 8 kHz mode.
The feedback performance is found to exceed previous results obtained with an FPGA- and Kalman-filter based control system without requiring any tuning of system model parameters. Experimental results are compared with simulations based on a combination of the Boozer surface current model and the Fitzpatrick-Aydemir model. Within the subset of phenomena that can be represented by the model as well as determined experimentally, qualitative agreement is found.
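The biorthogonal-decomposition filter mentioned above is not specified in detail in this abstract. As a rough stand-in, the sketch below builds a projection filter from the singular value decomposition of a sensors-by-time data matrix, treating the leading modes as the equilibrium component and projecting them out of each new sample; all dimensions and mode counts are illustrative assumptions.

```python
# Sketch, assuming the decomposition is computed as an SVD of a
# sensors-by-time training matrix: leading modes approximate slowly varying
# equilibrium fields, and projecting them out leaves the perturbation signal.
import numpy as np

def build_separation_filter(X, n_equilibrium_modes=2):
    # X: (n_sensors, n_samples) training data
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U_eq = U[:, :n_equilibrium_modes]
    P = np.eye(X.shape[0]) - U_eq @ U_eq.T   # projector onto perturbation subspace
    return lambda sample: P @ sample          # apply per time sample in real time

rng = np.random.default_rng(0)
X_train = rng.standard_normal((96, 2000))     # 96 channels, as on HBT-EP (illustrative data)
separate = build_separation_filter(X_train)
perturbed_part = separate(rng.standard_normal(96))
```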
Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor
2017-10-27
Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
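For readers unfamiliar with the Hafnian mentioned above: it is the sum over all perfect matchings of a symmetric matrix of the products of matched entries. The sketch below is a naive, exponential-time implementation for illustration only; practical Gaussian Boson sampling calculations use far more efficient algorithms, and the example matrix is arbitrary.

```python
# Illustrative (exponential-time) Hafnian of a symmetric 2n x 2n matrix:
# sum over all perfect matchings of the product of matched entries.
import numpy as np

def hafnian(A):
    A = np.asarray(A)
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

A = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 4.0, 5.0],
              [2.0, 4.0, 0.0, 6.0],
              [3.0, 5.0, 6.0, 0.0]])
print(hafnian(A))   # 1*6 + 2*5 + 3*4 = 28
```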
Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia
2018-01-01
As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039
Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia
2018-01-01
As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.
Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J
2006-11-01
The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.
Multi-school collaboration to develop and test nutrition computer modules for pediatric residents.
Roche, Patricia L; Ciccarelli, Mary R; Gupta, Sandeep K; Hayes, Barbara M; Molleston, Jean P
2007-09-01
The provision of essential nutrition-related content in US medical education has been deficient, despite efforts of the federal government and multiple professional organizations. Novel and efficient approaches are needed. A multi-department project was developed to create and pilot a computer-based compact disc instructional program covering the nutrition topics of oral rehydration therapy, calcium, and vitamins. Funded by an internal medical school grant, the content of the modules was written by Department of Pediatrics faculty. The modules were built by School of Informatics faculty and students, and were tested on a convenience sampling of 38 pediatric residents in a randomized controlled trial performed by a registered dietitian/School of Health and Rehabilitation Sciences Master's degree candidate. The modules were reviewed for content by the pediatric faculty principal investigator and the registered dietitian/School of Health and Rehabilitation Sciences graduate student. Residents completed a pretest of nutrition knowledge and attitude toward nutrition and Web-based instruction. Half the group was given three programs (oral rehydration therapy, calcium, and vitamins) on compact disc for study over 6 weeks. Both study and control groups completed a posttest. Pre- and postintervention objective test results in study vs control groups and attitudinal survey results before and after intervention in the study group were compared. The experimental group demonstrated significantly better posttrial objective test performance compared to the control group (P=0.0005). The study group tended toward improvement, whereas the control group performance declined substantially between pre- and posttests. Study group resident attitudes toward computer-based instruction improved. Use of these computer modules prompted almost half of the residents in the study group to independently pursue relevant nutrition-related information. This inexpensive, collaborative, multi-department effort to design a computer-based nutrition curriculum positively impacted both resident knowledge and attitudes.
A Moment of Mindfulness: Computer-Mediated Mindfulness Practice Increases State Mindfulness.
Mahmood, Lynsey; Hopthrow, Tim; Randsley de Moura, Georgina
2016-01-01
Three studies investigated the use of a 5-minute, computer-mediated mindfulness practice in increasing levels of state mindfulness. In Study 1, 54 high school students completed the computer-mediated mindfulness practice in a lab setting and Toronto Mindfulness Scale (TMS) scores were measured before and after the practice. In Study 2 (N = 90) and Study 3 (N = 61), the mindfulness practice was tested with an entirely online sample to test the delivery of the 5-minute mindfulness practice via the internet. In Study 2 and 3, we found a significant increase in TMS scores in the mindful condition, but not in the control condition. These findings highlight the impact of a brief, mindfulness practice for single-session, computer-mediated use to increase mindfulness as a state.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
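The state-estimation half of the approach, the particle filter, can be sketched as a standard bootstrap filter; the forward dynamic programming policy search and the H-block architecture from the paper are not reproduced here, and the scalar system below is a toy assumption.

```python
# Minimal bootstrap particle filter sketch for a scalar system, illustrating
# the estimation component of the sampling-based dual control approach.
import numpy as np

def particle_filter_step(particles, weights, y, f, h, q_std, r_std, rng):
    # Propagate particles through the dynamics with process noise.
    particles = f(particles) + rng.normal(0.0, q_std, size=particles.shape)
    # Reweight by the measurement likelihood and normalize.
    weights = weights * np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(
    particles, weights, y=0.3,
    f=lambda x: 0.9 * x, h=lambda x: x, q_std=0.1, r_std=0.2, rng=rng)
estimate = np.sum(weights * particles)
```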
Acoustic Detection Of Loose Particles In Pressure Sensors
NASA Technical Reports Server (NTRS)
Kwok, Lloyd C.
1995-01-01
Particle-impact-noise-detector (PIND) apparatus used in conjunction with computer program analyzing output of apparatus to detect extraneous particles trapped in pressure sensors. PIND tester essentially shaker equipped with microphone measuring noise in pressure sensor or other object being shaken. Shaker applies controlled vibration. Output of microphone recorded and expressed in terms of voltage, yielding history of noise subsequently processed by computer program. Data taken at sampling rate sufficiently high to enable identification of all impacts of particles on sensor diaphragm and on inner surfaces of sensor cavities.
User's Manual for Computer Program ROTOR [to calculate tilt-rotor aircraft dynamic characteristics]
NASA Technical Reports Server (NTRS)
Yasue, M.
1974-01-01
A detailed description of a computer program to calculate tilt-rotor aircraft dynamic characteristics is presented. This program consists of two parts: (1) the natural frequencies and corresponding mode shapes of the rotor blade and wing are developed from structural data (mass distribution and stiffness distribution); and (2) the frequency response (to gust and blade pitch control inputs) and eigenvalues of the tilt-rotor dynamic system, based on the natural frequencies and mode shapes, are derived. Sample problems are included to assist the user.
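Step (1) above, computing natural frequencies and mode shapes from mass and stiffness data, amounts to a generalized eigenvalue problem K φ = ω² M φ. The sketch below is not ROTOR (a FORTRAN program); the two-degree-of-freedom matrices are hypothetical.

```python
# Sketch of the structural step the manual describes: natural frequencies and
# mode shapes from mass and stiffness matrices via K @ phi = w^2 * M @ phi.
import numpy as np
from scipy.linalg import eigh

def modes(K, M):
    w2, phi = eigh(K, M)                 # generalized symmetric eigenproblem
    freqs_hz = np.sqrt(np.maximum(w2, 0.0)) / (2.0 * np.pi)
    return freqs_hz, phi

K = np.array([[ 400.0, -200.0],
              [-200.0,  200.0]])          # illustrative 2-DOF stiffness
M = np.diag([2.0, 1.0])                   # illustrative mass distribution
freqs, shapes = modes(K, M)
```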
Regev, Sivan; Hadas-Lidor, Noami; Rosenberg, Limor
2016-08-01
In this study, the assessment tool "Internet and Computer User Profile" questionnaire (ICUP) is presented and validated. It was developed in order to gather information for setting intervention goals to meet current demands. Sixty-eight subjects aged 23-68 participated in the study. The study group (n = 28) was sampled from two vocational centers. The control group consisted of 40 participants from the general population that were sampled by convenience sampling based on the demographics of the study group. Subjects from both groups answered the ICUP questionnaire. Subjects of the study group answered the General Self-Efficacy (GSE) questionnaire and performed the Assessment of Computer Task Performance (ACTP) test in order to examine the convergent validity of the ICUP. Twenty subjects from both groups retook the ICUP questionnaire in order to obtain test-retest results. Differences between groups were tested using multivariate analysis of variance (MANOVA) tests. Pearson and Spearman's tests were used for calculating correlations. Cronbach's alpha coefficient and k equivalent were used to assess internal consistency. The results indicate that the questionnaire is valid and reliable. They emphasize that the layout of the ICUP items facilitates a comprehensive examination of the client's perception regarding his participation in computer and internet activities. Implications for Rehabilitation: The assessment tool "Internet and Computer User Profile" (ICUP) questionnaire is a novel assessment tool that evaluates operative use and individual perception of computer activities. The questionnaire is valid and reliable for use with participants of vocational centers dealing with mental illness. It is essential to facilitate access to computers for people with mental illnesses, seeing that they express similar interest in computers and the internet as people from the general population of the same age. Early intervention will be particularly effective for young adults dealing with mental illness, since the digital gap between them and young people in general is relatively small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
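The core idea of one-step-ahead prediction for SDC detection can be sketched very simply: extrapolate each new value from its predecessors and flag it when the prediction error exceeds a tolerance. The paper's error feedback control and even-sampling refinements are not reproduced below, and the data and tolerance are toy assumptions.

```python
# Minimal sketch of one-step-ahead linear prediction for silent-data-corruption
# detection: extrapolate each value from the previous two samples and flag it
# when the prediction error exceeds a tolerance.
def sdc_flags(series, tol):
    flags = []
    for i, x in enumerate(series):
        if i < 2:
            flags.append(False)
            continue
        predicted = 2.0 * series[i - 1] - series[i - 2]   # linear extrapolation
        flags.append(abs(x - predicted) > tol)
    return flags

data = [1.0, 1.1, 1.2, 1.3, 9.7, 1.5]      # a bit flip corrupts one value
# The corrupted point, and the following point whose prediction it poisons,
# are flagged: [False, False, False, False, True, True]
print(sdc_flags(data, tol=0.5))
```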
A Long Range Science Rover For Future Mars Missions
NASA Technical Reports Server (NTRS)
Hayati, Samad
1997-01-01
This paper describes the design and implementation currently underway at the Jet Propulsion Laboratory of a long range science rover for future missions to Mars. The small rover prototype, called Rocky 7, is capable of long traverse, autonomous navigation, and science instrument control; carries three science instruments; and can be commanded from any computer platform and any location using the World Wide Web. In this paper we describe the mobility system, the sampling system, the sensor suite, navigation and control, onboard science instruments, and the ground command and control system.
Manual control models of industrial management
NASA Technical Reports Server (NTRS)
Crossman, E. R. F. W.
1972-01-01
The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by time-shared human effort. A modular structure incorporating certain new types of functional element has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.
Flexible structure control laboratory development and technology demonstration
NASA Technical Reports Server (NTRS)
Vivian, H. C.; Blaire, P. E.; Eldred, D. B.; Fleischer, G. E.; Ih, C.-H. C.; Nerheim, N. M.; Scheid, R. E.; Wen, J. T.
1987-01-01
An experimental structure is described which was constructed to demonstrate and validate recent emerging technologies in the active control and identification of large flexible space structures. The configuration consists of a large, 20-foot-diameter antenna-like flexible structure in the horizontal plane with a gimballed central hub, a flexible feed-boom assembly hanging from the hub, and 12 flexible ribs radiating outward. Fourteen electrodynamic force actuators mounted to the hub and to the individual ribs provide the means to excite the structure and exert control forces. Thirty permanently mounted sensors, including optical encoders and analog induction devices, provide measurements of structural response at widely distributed points. An experimental remote optical sensor provides sixteen additional sensing channels. A computer samples the sensors, computes the control updates, and sends commands to the actuators in real time, while simultaneously displaying selected outputs on a graphics terminal and saving them in memory. Several control experiments have been conducted thus far and are documented. These include implementation of distributed parameter system control, model reference adaptive control, and static shape control. These experiments have demonstrated the successful implementation of state-of-the-art control approaches using actual hardware.
Input-output oriented computation algorithms for the control of large flexible structures
NASA Technical Reports Server (NTRS)
Minto, K. D.
1989-01-01
An overview is given of work in progress aimed at developing computational algorithms addressing two important aspects in the control of large flexible space structures; namely, the selection and placement of sensors and actuators, and the resulting multivariable control law design problem. The issue of sensor/actuator set selection is particularly crucial to obtaining a satisfactory control design, as clearly a poor choice will inherently limit the degree to which good control can be achieved. With regard to control law design, the researchers are driven by concerns stemming from the practical issues associated with eventual implementation of multivariable control laws, such as reliability, limit protection, multimode operation, sampling rate selection, processor throughput, etc. Naturally, the burden imposed by dealing with these aspects of the problem can be reduced by ensuring that the complexity of the compensator is minimized. Our approach to these problems is based on extensions to input/output oriented techniques that have proven useful in the design of multivariable control systems for aircraft engines. In particular, researchers are exploring the use of relative gain analysis and the condition number as a means of quantifying the process of sensor/actuator selection and placement for shape control of a large space platform.
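The two screening quantities mentioned above, the relative gain array and the condition number, have compact definitions that the following sketch illustrates; the 2x2 steady-state gain matrix is an arbitrary example, not a model of the space platform.

```python
# Sketch of relative gain analysis and condition-number screening for
# sensor/actuator pairing: RGA(G) = G .* (inv(G)).T, cond(G).
import numpy as np

def relative_gain_array(G):
    return G * np.linalg.inv(G).T          # elementwise product

G = np.array([[0.878, -0.864],
              [1.082, -1.096]])            # illustrative 2x2 steady-state gains
print(relative_gain_array(G))              # diagonal elements near 1 favor diagonal pairing
print(np.linalg.cond(G))                   # large values warn of ill-conditioned plants
```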
NASA Astrophysics Data System (ADS)
Arnold, F.; DeMallie, I.; Florence, L.; Kashinski, D. O.
2015-03-01
This manuscript addresses the design, hardware details, construction, and programming of an apparatus allowing an experimenter to monitor and record high-temperature thermocouple measurements of dynamic systems in real time. The apparatus uses wireless network technology to bridge the gap between a dynamic (moving) sample frame and the static laboratory frame. Our design is a custom solution applied to samples that rotate through large angular displacements where hard-wired and typical slip-ring solutions are not practical because of noise considerations. The apparatus consists of a Raspberry PI mini-Linux computer, an Arduino micro-controller, an Ocean Controls thermocouple multiplexer shield, and k-type thermocouples.
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
Research in Multirate Estimation and Control--Optimal Sample Rate Selection.
1981-10-08
scale integration (VLSI), expanded computational capabilities are inevitably consumed in implementation. In aircraft applications, for example, new and... The Hamiltonian chosen for the present work is given below: H = J + λC (2.3-1). The Hamiltonian is constructed by...
Mars oxygen production system design
NASA Technical Reports Server (NTRS)
Cotton, Charles E.; Pillow, Linda K.; Perkinson, Robert C.; Brownlie, R. P.; Chwalowski, P.; Carmona, M. F.; Coopersmith, J. P.; Goff, J. C.; Harvey, L. L.; Kovacs, L. A.
1989-01-01
The design and construction phase is summarized of the Mars oxygen demonstration project. The basic hardware required to produce oxygen from simulated Mars atmosphere was assembled and tested. Some design problems still remain with the sample collection and storage system. In addition, design and development of computer compatible data acquisition and control instrumentation is ongoing.
ERIC Educational Resources Information Center
Elgie, Robert; Sapien, Robert; Fullerton, Lynne; Moore, Brian
2010-01-01
The objective of this study was to evaluate the effectiveness of a computer-assisted emergency preparedness course for school nurses. Participants from a convenience sample (52) of school nurses from New Mexico were randomly assigned to intervention or control groups in an experimental after-only posttest design. Intervention group participants…
An Overview of Public Access Computer Software Management Tools for Libraries
ERIC Educational Resources Information Center
Wayne, Richard
2004-01-01
An IT decision maker gives an overview of public access PC software that's useful in controlling session length and scheduling, Internet access, print output, security, and the latest headaches: spyware and adware. In this article, the author describes a representative sample of software tools in several important categories such as setup…
Mars oxygen production system design
NASA Technical Reports Server (NTRS)
1988-01-01
This report summarizes the design and construction of the Mars oxygen demonstration project. The basic hardware required to produce oxygen from simulated Mars atmosphere has been assembled and tested. Some design problems still remain with the sample collection and storage system. In addition, design and development of computer data acquisition and control instrumentation is continuing.
Transformation of Personal Computers and Mobile Phones into Genetic Diagnostic Systems
2014-08-31
a Gel Logic System using UV transillumination and a 535 nm optical filter (Kodak). The positive control PCR was performed by taking an aliquot of the...described in the section above. Samples were excited by a UV transilluminator (Kodak). For imaging, a 520 ± 10 nm bandpass filter (Edmund Optics) was
The Czech Hydrometeorological Institute (CHMI) in collaboration with the U.S. Environmental Protection Agency conducted a multi-pollutant source apportionment study in 2012 to quantify the impact of regional as well as local sources on air quality in the Ostrava metropolitan area...
ERIC Educational Resources Information Center
Hardre, Patricia L.; Crowson, H. Michael; Xie, Kui; Ly, Cong
2007-01-01
Translation of questionnaire instruments to digital administration systems, both self-contained and web-based, is widespread and increasing daily. However, the literature is lean on controlled empirical studies investigating the potential for differential effects of administrative methods. In this study, two university student samples were…
The Network of Global Corporate Control
Vitali, Stefania; Glattfelder, James B.; Battiston, Stefano
2011-01-01
The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers. PMID:22046252
NASA Astrophysics Data System (ADS)
Johansen, T. H.; Feder, J.; Jøssang, T.
1986-06-01
A fully automated apparatus has been designed for measurements of dilatation in solid samples under well-defined thermal conditions. The oven can be thermally stabilized to better than 0.1 mK over a temperature range of -60 to 150 °C using a two-stage control strategy. Coarse control is obtained by heat exchange with a circulating thermal fluid, whereas the fine regulation is based on a solid-state heat pump—a Peltier element, acting as heating and cooling source. The bidirectional action of the Peltier element permits the sample block to be controlled at the average temperature of the surroundings, thus making an essentially adiabatic system with a minimum of thermal gradients in the sample block. The dilatometer cell integrated in the oven assembly is of the parallel plate air capacitor type, and the apparatus has been successfully used with a sensitivity of 0.07 Å. Our system is well suited for measurements near structural phase transitions with a relative resolution of Δt = (T - Tc)/Tc = 2×10⁻⁷ in temperature and ΔL/L = 1×10⁻⁹ in strain.
NASA Astrophysics Data System (ADS)
Eardley, Julie Anne
The purpose of this study was to determine the effect of different instructional media (computer assisted instruction (CAI) tutorial vs. traditional textbook) on student attitudes toward science and computers and achievement scores in a team-taught integrated science course, ENS 1001, "The Whole Earth Course," which was offered at Florida Institute of Technology during the Fall 2000 term. The effect of gender on student attitudes toward science and computers and achievement scores was also investigated. This study employed a randomized pretest-posttest control group experimental research design with a sample of 30 students (12 males and 18 females). Students had registered for weekly lab sessions that accompanied the course and had been randomly assigned to the treatment or control group. The treatment group used a CAI tutorial for completing homework assignments and the control group used the required textbook for completing homework assignments. The Attitude toward Science and Computers Questionnaire and Achievement Test were the two instruments administered during this study to measure students' attitudes and achievement score changes. A multivariate analysis of covariance (MANCOVA), using hierarchical multiple regression/correlation (MRC), was employed to determine: (1) treatment versus control group attitude and achievement differences; and (2) male versus female attitude and achievement differences. The differences between the treatment group's and control group's homework averages were determined by t test analyses. The overall MANCOVA model was found to be significant at p < .05. Examining research factor set independent variables separately resulted in gender being the only variable that significantly contributed in explaining the variability in a dependent variable, attitudes toward science and computers. T test analyses of the homework averages showed no significant differences. Contradictory to the findings of this study, anecdotal information from personal communication, course evaluations, and homework assignments indicated favorable attitudes and higher achievement scores for a majority of the students in the treatment group.
Computer soundcard as an AC signal generator and oscilloscope for the physics laboratory
NASA Astrophysics Data System (ADS)
Sinlapanuntakul, Jinda; Kijamnajsuk, Puchong; Jetjamnong, Chanthawut; Chotikaprakhan, Sutharat
2018-01-01
The purpose of this paper is to develop both an AC signal generator and a dual-channel oscilloscope based on a standard personal computer equipped with a sound card, for use in the fundamental physics laboratory and introductory electronics classes. The setup turns the computer into a two-channel measurement device that provides the sample rate, simultaneous sampling, frequency range, filtering, and other essential capabilities required to perform amplitude, phase, and frequency measurements of AC signals. The AC signal is also generated simultaneously from the same computer sound card output in any waveform, such as sine, square, triangle, saw-tooth, pulse, swept sine, and white noise. This converts an inexpensive PC sound card into a powerful device, which allows students to measure physical phenomena with their own PCs either at home or at the university. Graphical user interface software was developed for control and analysis, including facilities for data recording, signal processing, and real-time measurement display. The result is expanded utility for self-learning in the field of electronics, covering both AC and DC circuits as well as sound and vibration experiments.
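The authors' GUI software is not publicly specified in this abstract, but the basic idea can be sketched with the third-party Python package sounddevice (an assumption, not the authors' tool): play a test sine on the card's output while recording the input, then estimate the dominant frequency of the captured signal with an FFT.

```python
# Sketch of the soundcard signal generator + oscilloscope idea using the
# "sounddevice" package (an assumed stand-in for the authors' software).
import numpy as np
import sounddevice as sd

fs = 44100                                  # sound card sample rate
t = np.arange(0, 1.0, 1.0 / fs)
test_tone = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

captured = sd.playrec(test_tone, samplerate=fs, channels=1)
sd.wait()                                   # block until playback/record finish

spectrum = np.abs(np.fft.rfft(captured[:, 0]))
freqs = np.fft.rfftfreq(len(captured), 1.0 / fs)
print("dominant frequency:", freqs[np.argmax(spectrum)], "Hz")
```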
Stuckless, J.S.; VanTrump, G.
1979-01-01
A revised version of Graphic Normative Analysis Program (GNAP) has been developed to allow maximum flexibility in the evaluation of chemical data by the occasional computer user. GNAP calculates CIPW norms, Thornton and Tuttle's differentiation index, Barth's cations, Niggli values and values for variables defined by the user. Calculated values can be displayed graphically in X-Y plots or ternary diagrams. Plotting can be done on a line printer or Calcomp plotter with either weight percent or mole percent data. Modifications in the original program give the user some control over normative calculations for each sample. The number of user-defined variables that can be created from the data has been increased from ten to fifteen. Plotting and calculations can be based on the original data, data adjusted to sum to 100 percent, or data adjusted to sum to 100 percent without water. Analyses for which norms were previously not computable are now computed with footnotes that show excesses or deficiencies in oxides (or volatiles) not accounted for by the norm. This report contains a listing of the computer program, an explanation of the use of the program, and two sample problems.
Shelton, Ann K; Freeman, Bradley D; Fish, Anne F; Bachman, Jean A; Richardson, Lloyd I
2015-03-01
Many research studies conducted today in critical care have a genomics component. Patients' surrogates asked to authorize participation in genomics research for a loved one in the intensive care unit may not be prepared to make informed decisions about a patient's participation in the research. To examine the effectiveness of a new, computer-based education module on surrogates' understanding of the process of informed consent for genomics research. A pilot study was conducted with visitors in the waiting rooms of 2 intensive care units in a Midwestern tertiary care medical center. Visitors were randomly assigned to the experimental (education module plus a sample genomics consent form; n = 65) or the control (sample genomics consent form only; n = 69) group. Participants later completed a test on informed genomics consent. Understanding the process of informed consent was greater (P = .001) in the experimental group than in the control group. Specifically, compared with the control group, the experimental group had a greater understanding of 8 of 13 elements of informed consent: intended benefits of research (P = .02), definition of surrogate consenter (P= .001), withdrawal from the study (P = .001), explanation of risk (P = .002), purpose of the institutional review board (P = .001), definition of substituted judgment (P = .03), compensation for harm (P = .001), and alternative treatments (P = .004). Computer-based education modules may be an important addition to conventional approaches for obtaining informed consent in the intensive care unit. Preparing patients' family members who may consider serving as surrogate consenters is critical to facilitating genomics research in critical care. ©2015 American Association of Critical-Care Nurses.
On the effects of signal processing on sample entropy for postural control.
Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal
2018-01-01
Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore wished to investigate the effect of parameters choice and signal processing on participants' sample entropy outcome. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes, while standing on floor or a compliant surface. We then analyzed the effect of: modification of the radius of similarity (r) and the embedding dimension (m); down-sampling or filtering and differencing or detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in medio-lateral and anterior-posterior directions across r's and m's. We also found a significant interaction group × surface in the medio-lateral direction when r was 0.05 or 0.1 with a monotonic increase in p value with increasing r in both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly of signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed they should be well justified.
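For reference, sample entropy with embedding dimension m and radius r (commonly expressed as a fraction of the signal's standard deviation) can be computed as in the sketch below; the parameter values and test signals are illustrative, and the study's specific pre-processing steps are deliberately omitted.

```python
# Minimal sample entropy sketch for a 1-D center-of-pressure signal.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))        # regular signal: low value
print(sample_entropy(rng.standard_normal(500)))               # irregular signal: high value
```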
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
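The decomposition idea, splitting a multidimensional interpolation into independent 1-D passes, can be sketched for a simple image resize; plain linear interpolation stands in below for the paper's registration-based 1-D control grid interpolator, and the image is a toy example.

```python
# Sketch of decomposed interpolation: a 2-D resize performed as two
# independent passes of 1-D interpolation (rows, then columns).
import numpy as np

def resize_separable(image, new_h, new_w):
    h, w = image.shape
    # Pass 1: interpolate each row to the new width.
    x_old, x_new = np.linspace(0, 1, w), np.linspace(0, 1, new_w)
    rows = np.array([np.interp(x_new, x_old, image[i]) for i in range(h)])
    # Pass 2: interpolate each column to the new height.
    y_old, y_new = np.linspace(0, 1, h), np.linspace(0, 1, new_h)
    return np.array([np.interp(y_new, y_old, rows[:, j]) for j in range(new_w)]).T

img = np.arange(16, dtype=float).reshape(4, 4)
print(resize_separable(img, 8, 8).shape)    # (8, 8)
```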
Higo, Junichi; Ikebe, Jinzen; Kamiya, Narutoshi; Nakamura, Haruki
2012-03-01
Protein folding and protein-ligand docking have long persisted as important subjects in biophysics. Using multicanonical molecular dynamics (McMD) simulations with realistic expressions, i.e., all-atom protein models and an explicit solvent, free-energy landscapes have been computed for several systems, such as the folding of peptides/proteins composed of a few amino acids up to nearly 60 amino-acid residues, protein-ligand interactions, and coupled folding and binding of intrinsically disordered proteins. Recent progress in conformational sampling and its applications to biophysical systems are reviewed in this report, including descriptions of several outstanding studies. In addition, an algorithm and detailed procedures used for multicanonical sampling are presented along with the methodology of adaptive umbrella sampling. Both methods control the simulation so that low-probability regions along a reaction coordinate are sampled frequently. The reaction coordinate is the potential energy for multicanonical sampling and is a structural identifier for adaptive umbrella sampling. One might imagine that this probability control invariably enhances conformational transitions among distinct stable states, but this study examines the enhanced conformational sampling of a simple system and shows that reasonably well-controlled sampling slows the transitions. This slowing is induced by a rapid change of entropy along the reaction coordinate. We then provide a recipe to speed up the sampling by loosening the rapid change of entropy. Finally, we report all-atom McMD simulation results of various biophysical systems in an explicit solvent.
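The probability control described above can be sketched, very schematically, as Metropolis sampling with a non-Boltzmann weight chosen to flatten the distribution along the reaction coordinate; in the toy below the weight function is simply given rather than iteratively estimated as in a real multicanonical run, and the double-well potential is an arbitrary stand-in for a biomolecular energy surface.

```python
# Toy multicanonical-style Metropolis sketch: acceptance uses a reweighting
# function of the energy instead of the canonical Boltzmann factor.
import numpy as np

def muca_metropolis(energy, log_weight, x0, step, n_steps, rng):
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, step)
        # Accept with probability w(E_new) / w(E_old).
        if np.log(rng.uniform()) < log_weight(energy(x_new)) - log_weight(energy(x)):
            x = x_new
        samples.append(x)
    return np.array(samples)

energy = lambda x: (x**2 - 1.0)**2           # double-well toy potential
log_weight = lambda E: -0.2 * E              # stand-in for an iteratively refined weight
rng = np.random.default_rng(0)
traj = muca_metropolis(energy, log_weight, x0=-1.0, step=0.3, n_steps=5000, rng=rng)
```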
Fazelniya, Zahra; Najafi, Mostafa; Moafi, Alireza; Talakoub, Sedigheh
2017-01-01
Quality of life (QOL) of children with cancer reduces right from the diagnosis of disease and the start of treatment. Computer games in medicine are utilized to interact with patients and to improve their health-related behaviors. This study aimed to investigate the effect of an interactive computer game on the QOL of children undergoing chemotherapy. In this clinical trial, 64 children with cancer aged between 8 and12 years were selected through convenience sampling and randomly assigned to experimental or control group. The experimental group played a computer game for 3 hours a week for 4 consecutive weeks and the control group only received routine care. The data collection tool was the Pediatric Quality of Life Inventory (PedsQL) 3.0 Cancer Module Child self-report designed for children aged between 8 to 12 years. Data were analyzed using descriptive and inferential statistics in SPSS software. Before intervention, there was no significant difference between the two groups in terms of mean total QOL score ( p = 0.87). However, immediately after the intervention ( p = 0.02) and 1 month after the intervention ( p < 0.001), the overall mean QOL score was significantly higher in the intervention group than the control group. Based on the findings, computer games seem to be effective as a tool in influencing health-related behavior and improving the QOL of children undergoing chemotherapy. Therefore, according to the findings of this study, computer games can be used to improve the QOL of children undergoing chemotherapy.
Fazelniya, Zahra; Najafi, Mostafa; Moafi, Alireza; Talakoub, Sedigheh
2017-01-01
Background: Quality of life (QOL) of children with cancer reduces right from the diagnosis of disease and the start of treatment. Computer games in medicine are utilized to interact with patients and to improve their health-related behaviors. This study aimed to investigate the effect of an interactive computer game on the QOL of children undergoing chemotherapy. Materials and Methods: In this clinical trial, 64 children with cancer aged between 8 and12 years were selected through convenience sampling and randomly assigned to experimental or control group. The experimental group played a computer game for 3 hours a week for 4 consecutive weeks and the control group only received routine care. The data collection tool was the Pediatric Quality of Life Inventory (PedsQL) 3.0 Cancer Module Child self-report designed for children aged between 8 to 12 years. Data were analyzed using descriptive and inferential statistics in SPSS software. Results: Before intervention, there was no significant difference between the two groups in terms of mean total QOL score (p = 0.87). However, immediately after the intervention (p = 0.02) and 1 month after the intervention (p < 0.001), the overall mean QOL score was significantly higher in the intervention group than the control group. Conclusions: Based on the findings, computer games seem to be effective as a tool in influencing health-related behavior and improving the QOL of children undergoing chemotherapy. Therefore, according to the findings of this study, computer games can be used to improve the QOL of children undergoing chemotherapy. PMID:29184580
Health Status and Health Dynamics in an Empirical Model of Expected Longevity*
Benítez-Silva, Hugo; Ni, Huan
2010-01-01
Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217
Adaptive control of turbulence intensity is accelerated by frugal flow sampling.
Quinn, Daniel B; van Halder, Yous; Lentink, David
2017-11-01
The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
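A minimal sketch of an online Newton-Raphson tuner of the kind described above (the synthetic anemometer model, the actuation variable u, and the slope guard are assumptions, not the wind-tunnel implementation):

```python
import numpy as np

def measure_intensity(u, rng, n_samples=2000):
    """Stand-in for an anemometer: turbulence intensity (std/mean of the
    sampled velocity) at actuation level u. Entirely synthetic."""
    mean_speed = 10.0
    true_intensity = 0.09 + 0.4 * u          # assumed monotone plant response
    v = rng.normal(mean_speed, true_intensity * mean_speed, n_samples)
    return np.std(v) / np.mean(v)

def newton_tune(target, u0=0.0, iters=12, du=0.05, seed=0):
    """Online Newton-Raphson on f(u) = I(u) - target, with the slope taken
    from a finite difference of two consecutive measurements."""
    rng = np.random.default_rng(seed)
    u = u0
    for _ in range(iters):
        intensity = measure_intensity(u, rng)
        if abs(intensity - target) < 0.01 * target:     # ~1% convergence
            break
        slope = (measure_intensity(u + du, rng) - intensity) / du
        slope = max(slope, 1e-2)              # guard: assume increasing response
        u = float(np.clip(u - (intensity - target) / slope, 0.0, 1.0))
    return u, intensity

print(newton_tune(target=0.30))
```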
Roy, Nelson; Fetrow, Rebecca A; Merrill, Ray M; Dromey, Christopher
2016-10-01
Vocal hyperfunction, related to abnormal laryngeal muscle activity, is considered the proximal cause of primary muscle tension dysphonia (pMTD). Relative fundamental frequency (RFF) has been proposed as an objective acoustic marker of vocal hyperfunction. This study examined (a) the ability of RFF to track changes in vocal hyperfunction after treatment for pMTD and (b) the influence of dysphonia severity, among other factors, on the feasibility of RFF computation. RFF calculations and dysphonia severity ratings were derived from pre- and posttreatment recordings from 111 women with pMTD and 20 healthy controls. Three vowel-voiceless consonant-vowel stimuli were analyzed. RFF onset slope consistently varied as a function of group (pMTD vs. controls) and time (pretherapy vs. posttherapy). Significant correlations between RFF onset cycle 1 and dysphonia severity were observed. However, in many samples, RFF could not be computed, and adjusted odds ratios revealed that these unanalyzable data were linked to dysphonia severity, phonetic (vowel-voiceless consonant-vowel) context, and group (pMTD vs. control). RFF onset appears to be sensitive to the presence and degree of suspected vocal hyperfunction before and after therapy. The large number of unanalyzable samples (related especially to dysphonia severity in the pMTD group) represents an important limitation.
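RFF is conventionally the fundamental frequency of the glottal cycles bordering the voiceless consonant, normalized to a steady-state reference cycle and expressed in semitones; a sketch of that final conversion follows (cycle detection, the hard part, is assumed to have been done elsewhere, and the period values are made up):

```python
import numpy as np

def rff_semitones(cycle_periods_s, ref_period_s):
    """Relative fundamental frequency of each cycle, in semitones, relative
    to a steady-state reference cycle."""
    f0 = 1.0 / np.asarray(cycle_periods_s, dtype=float)
    f_ref = 1.0 / ref_period_s
    return 12.0 * np.log2(f0 / f_ref)

# ten voicing-offset cycles before a voiceless consonant (illustrative numbers)
periods = [0.0050, 0.0050, 0.0051, 0.0051, 0.0052,
           0.0053, 0.0054, 0.0056, 0.0058, 0.0061]
print(rff_semitones(periods, ref_period_s=periods[0]))
```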
Teuchmann, K; Totterdell, P; Parker, S K
1999-01-01
Experience sampling methodology was used to examine how work demands translate into acute changes in affective response and thence into chronic response. Seven accountants reported their reactions 3 times a day for 4 weeks on pocket computers. Aggregated analysis showed that mood and emotional exhaustion fluctuated in parallel with time pressure over time. Disaggregated time-series analysis confirmed the direct impact of high-demand periods on the perception of control, time pressure, and mood and the indirect impact on emotional exhaustion. A curvilinear relationship between time pressure and emotional exhaustion was shown. The relationships between work demands and emotional exhaustion changed between high-demand periods and normal working periods. The results suggest that enhancing perceived control may alleviate the negative effects of time pressure.
Nuclear sensor signal processing circuit
Kallenbach, Gene A [Bosque Farms, NM]; Noda, Frank T [Albuquerque, NM]; Mitchell, Dean J [Tijeras, NM]; Etzkin, Joshua L [Albuquerque, NM]
2007-02-20
An apparatus and method are disclosed for a compact and temperature-insensitive nuclear sensor that can be calibrated with a non-hazardous radioactive sample. The nuclear sensor includes a gamma ray sensor that generates tail pulses from radioactive samples. An analog conditioning circuit conditions the tail-pulse signals from the gamma ray sensor, and a tail-pulse simulator circuit generates a plurality of simulated tail-pulse signals. A computer system processes the tail pulses from the gamma ray sensor and the simulated tail pulses from the tail-pulse simulator circuit. The nuclear sensor is calibrated under the control of the computer. The offset is adjusted using the simulated tail pulses. Since the offset is set to zero or near zero, the sensor gain can be adjusted with a non-hazardous radioactive source such as, for example, naturally occurring radiation and potassium chloride.
System Administrator for LCS Development Sets
NASA Technical Reports Server (NTRS)
Garcia, Aaron
2013-01-01
The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of Development and Operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The position of System Administrator will complete tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position of system administrator comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, and creating backups and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception. Training on how to better protect the NASA computer infrastructure will be a topic to learn, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access. Root access gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.
Estimating risk and rate levels, ratios and differences in case-control studies.
King, Gary; Zeng, Langche
2002-05-30
Classic (or 'cumulative') case-control sampling designs do not admit inferences about quantities of interest other than risk ratios, and then only by making the rare events assumption. Probabilities, risk differences and other quantities cannot be computed without knowledge of the population incidence fraction. Similarly, density (or 'risk set') case-control sampling designs do not allow inferences about quantities other than the rate ratio. Rates, rate differences, cumulative rates, risks, and other quantities cannot be estimated unless auxiliary information about the underlying cohort such as the number of controls in each full risk set is available. Most scholars who have considered the issue recommend reporting more than just risk and rate ratios, but auxiliary population information needed to do this is not usually available. We address this problem by developing methods that allow valid inferences about all relevant quantities of interest from either type of case-control study when completely ignorant of or only partially knowledgeable about relevant auxiliary population information.
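The abstract does not reproduce the estimators; as one concrete illustration of the general idea, absolute risks can be recovered from a cumulative case-control logistic fit by "prior correction" of the intercept when the population incidence fraction is known or assumed. The names below are placeholders, and the paper's full method also handles partial knowledge and uncertainty, which this sketch does not:

```python
import numpy as np

def correct_intercept(beta0_hat, sample_case_frac, pop_incidence):
    """Prior-correct the logistic intercept estimated from a cumulative
    case-control sample so that absolute risks can be computed.
    beta0_hat: intercept from the case-control fit
    sample_case_frac: fraction of cases in the sample (ybar)
    pop_incidence: assumed population incidence fraction (tau)."""
    ybar, tau = sample_case_frac, pop_incidence
    return beta0_hat - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

def risk(x, beta0_corrected, beta1):
    """Absolute risk at covariate value x under the corrected model."""
    return 1.0 / (1.0 + np.exp(-(beta0_corrected + beta1 * x)))

# toy numbers: half the sample are cases, assumed population incidence 2%
b0 = correct_intercept(beta0_hat=0.1, sample_case_frac=0.5, pop_incidence=0.02)
print(risk(1.0, b0, beta1=0.8) - risk(0.0, b0, beta1=0.8))   # a risk difference
```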
Adaptive Control and Parameter Identification of a Doubly-Fed Induction Generator for Wind Power
2011-09-01
[26] Computer Controlled Systems, Theory and Design, Third Edition, Prentice Hall, New Jersey, 1997. [27] R. G. Brown and P. Y. C. Hwang, Introduction to... With Ts as the sampling interval, the recursive estimate from [26] can be interpreted as a Kalman filter for the process... by substituting t with n. The recursive equations for the RLS can then be derived from the Kalman filter equations used in [27].
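The RLS recursion referred to above is standard; a minimal sketch follows (generic regressor and measurement names, with a forgetting factor lam as a common extension; none of this is taken from the report itself):

```python
import numpy as np

class RLS:
    """Recursive least squares; lam = 1 gives the ordinary growing-window
    estimate, lam < 1 discounts old samples."""
    def __init__(self, n_params, lam=1.0, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # Kalman-like gain
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# identify y = 2*u - 0.5 from noisy samples taken every Ts seconds
rng = np.random.default_rng(1)
est = RLS(n_params=2)
for _ in range(200):
    u = rng.uniform(-1, 1)
    y = 2.0 * u - 0.5 + 0.05 * rng.normal()
    theta = est.update([u, 1.0], y)
print(theta)   # approaches [2.0, -0.5]
```

With lam = 1 the recursion reproduces the batch least-squares solution; lam < 1 is the usual choice when the plant parameters drift, as in the wind-power application above.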
Sampled Data Adaptive Digital Computer Control of Surface Ship Maneuvers
1976-06-01
0.53 feet. Systems for which fuel considerations are not a motivating factor may be designed without this part of the control law to allow finer...
2013-01-01
Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395
Remediation of Deficits in Recognition of Facial Emotions in Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Weinger, Paige M.; Depue, Richard A.
2011-01-01
This study evaluated the efficacy of the Mind Reading interactive computer software to remediate emotion recognition deficits in children with autism spectrum disorders (ASD). Six unmedicated children with ASD and 11 unmedicated non-clinical control subjects participated in the study. The clinical sample used the software for five sessions. The…
ERIC Educational Resources Information Center
Freund, Philipp Alexander; Hofer, Stefan; Holling, Heinz
2008-01-01
Figural matrix items are a popular task type for assessing general intelligence (Spearman's g). Items of this kind can be constructed rationally, allowing the implementation of computerized generation algorithms. In this study, the influence of different task parameters on the degree of difficulty in matrix items was investigated. A sample of N =…
Using Technology for Teaching Arabic Language Grammar
ERIC Educational Resources Information Center
Arrabtah, Adel; Nusour, Tayseer
2012-01-01
This study investigates the effect of using technology such as CD-ROM, computers, and internet to teach Arabic language grammar to students at Princess Alia University College at Al-Balqa University. The sample of the study consisted of 122 third year female students; (64) for the experimental group and (58) for the control group. The subjects of…
[The development of an intelligent four-channel aggregometer].
Guan, X; Wang, M
1998-07-01
The paper introduces the hardware and software design of the instrument. An 89C52 single-chip microcontroller is used as the microprocessor to control the amplifier and the A/D and D/A conversion chips, realizing sampling, data processing, printout, and supervision. The final result is printed out in the form of data and an aggregation curve on a PP40 plotter.
Modular Aero-Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2006-01-01
The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.
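A toy multi-rate loop in the spirit of the MAPSS rates (plant integration at 2,500 Hz, controller update at 50 Hz with a zero-order hold between updates); the first-order "engine" and the PI gains are placeholders, not MAPSS models:

```python
# minimal multi-rate simulation sketch: fast plant, slow digital controller
PLANT_HZ, CTRL_HZ = 2500, 50
dt = 1.0 / PLANT_HZ
steps_per_update = PLANT_HZ // CTRL_HZ        # 50 plant steps per control step

x, u, integ = 0.0, 0.0, 0.0                   # plant state, held actuator, PI integrator
setpoint, kp, ki = 1.0, 2.0, 5.0
tau = 0.4                                     # toy engine time constant (s)

for k in range(PLANT_HZ):                     # simulate one second
    if k % steps_per_update == 0:             # controller runs at 50 Hz
        err = setpoint - x
        integ += err / CTRL_HZ
        u = kp * err + ki * integ             # zero-order hold between updates
    x += dt * (-(x - u) / tau)                # plant integrated at 2,500 Hz

print(f"state after 1 s: {x:.3f}")
```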
Line drawing of STS-34 middeck experiment Polymer Morphology (PM)
NASA Technical Reports Server (NTRS)
1989-01-01
STS-34 middeck experiment Polymer Morphology (PM) and its apparatus is illustrated in this line drawing. Apparatus for the experiment, developed by 3M, includes a Fourier transform infrared (FTIR) spectrometer, an automatic sample manipulating system and a process control and data acquisition computer known as the Generic Electronics Module (GEM). STS-34 mission specialists will interface with the PM experiment through a small, NASA-supplied laptop computer that is used as an input and output device for the main PM computer. PM experiment is an organic materials processing experiment designed to explore the effects of microgravity on polymeric materials as they are processed in space and is being conducted by 3M's Space Research and Applications Laboratory.
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/mT for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
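In the notation suggested by the abstract (sigma the Bromwich contour abscissa, T the time-range parameter, m the new accuracy-control parameter; the exact symbols are assumptions), the trapezoidal-rule approximation being refined is:

```latex
% trapezoidal-rule approximation of the Bromwich inversion integral with the
% refined frequency step \Delta\omega = \pi/(mT); notation assumed, not the paper's
f(t) \;=\; \frac{e^{\sigma t}}{2\pi}\int_{-\infty}^{\infty}
           F(\sigma + j\omega)\, e^{j\omega t}\, d\omega
\;\approx\; \frac{e^{\sigma t}\,\Delta\omega}{2\pi}
           \sum_{k=-\infty}^{\infty} F(\sigma + jk\,\Delta\omega)\,
           e^{jk\,\Delta\omega t},
\qquad \Delta\omega = \frac{\pi}{mT}.
```

A finer step (larger m) improves the quadrature at the cost of spreading the work over the multiple N-point FFT or FHT passes mentioned in the abstract.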
Analysis of helium-ion scattering with a desktop computer
NASA Astrophysics Data System (ADS)
Butler, J. W.
1986-04-01
This paper describes a program written in an enhanced BASIC language for a desktop computer, for simulating the energy spectra of high-energy helium ions scattered into two concurrent detectors (backward and glancing). The program is designed for 512-channel spectra from samples containing up to 8 elements and 55 user-defined layers. The program is intended to meet the needs of analyses in materials sciences, such as metallurgy, where more than a few elements may be present, where several elements may be near each other in the periodic table, and where relatively deep structure may be important. These conditions preclude the use of completely automatic procedures for obtaining the sample composition directly from the scattered ion spectrum. Therefore, efficient methods are needed for entering and editing large amounts of composition data, with many iterations and with much feedback of information from the computer to the user. The internal video screen is used exclusively for verbal and numeric communications between user and computer. The composition matrix is edited on screen with a two-dimension forms-fill-in text editor and with many automatic procedures, such as doubling the number of layers with appropriate interpolations and extrapolations. The control center of the program is a bank of 10 keys that initiate on-event branching of program flow. The experimental and calculated spectra, including those of individual elements if desired, are displayed on an external color monitor, with an optional inset plot of the depth concentration profiles of the elements in the sample.
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter C.
1989-01-01
The ability of feed-forward neural net architectures to learn continuous-valued mappings in the presence of noise is demonstrated in relation to parameter identification and real-time adaptive control applications. Factors and parameters influencing the learning performance of such nets in the presence of noise are identified. Their effects are discussed through a computer simulation of the Back-Error-Propagation algorithm by taking the example of the cart-pole system controlled by a nonlinear control law. Adequate sampling of the state space is found to be essential for canceling the effect of the statistical fluctuations and allowing learning to take place.
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
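For readers unfamiliar with MBAR, a compact self-consistent implementation of its core equations is sketched below (generic NumPy/SciPy code, not the authors' scripts; in practice the pymbar package is the usual tool). The toy example at the end predicts the free energy of a third, unsampled state from samples drawn at two others:

```python
import numpy as np
from scipy.special import logsumexp

def mbar_free_energies(u_kn, N_k, tol=1e-8, max_iter=50000):
    """Self-consistent solution of the MBAR equations (a generic sketch).
    u_kn: (K, N) reduced potentials of all N pooled samples evaluated in every
          state k, including unsampled states (N_k = 0).
    N_k:  (K,) number of samples drawn from each state.
    Returns reduced free energies f_k with f_0 fixed at zero."""
    u_kn = np.asarray(u_kn, dtype=float)
    N_k = np.asarray(N_k, dtype=float)
    K, N = u_kn.shape
    f = np.zeros(K)
    logN = np.where(N_k > 0, np.log(np.maximum(N_k, 1.0)), -np.inf)
    for _ in range(max_iter):
        # log of the mixture denominator: log sum_k N_k exp(f_k - u_k(x_n))
        log_denom = logsumexp((f + logN)[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

# toy check: two sampled Gaussian "states" and one unsampled state; all three
# have the same width, so all reduced free-energy differences should be ~0
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)])
u_kn = np.stack([(x - mu) ** 2 / 2 for mu in (0.0, 1.0, 2.0)])
print(mbar_free_energies(u_kn, [500, 500, 0]))
```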
NASA Technical Reports Server (NTRS)
Hazelton, R. C.; Yadlowsky, E. J.; Churchill, R. J.; Parker, L. W.; Sellers, B.
1981-01-01
The effect of differential charging of spacecraft thermal control surfaces is assessed by studying the dynamics of the charging process. A program to experimentally validate a computer model of the charging process was established. Time resolved measurements of the surface potential were obtained for samples of Kapton and Teflon irradiated with a monoenergetic electron beam. Results indicate that the computer model and experimental measurements agree well and that for Teflon, secondary emission is the governing factor. Experimental data indicate that bulk conductivities play a significant role in the charging of Kapton.
Ultra-wide Range Gamma Detector System for Search and Locate Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S. Jr.
2005-10-26
Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long range radiation search and close range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose from 100's of uR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long range operations. Long range sensitivity and target location, as well as close range sample selection performance, are presented.
Assessment of computer-related health problems among post-graduate nursing students.
Khan, Shaheen Akhtar; Sharma, Veena
2013-01-01
The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self Instructional Module for prevention of computer-related health problems in a selected university situated in Delhi. A descriptive survey with correlational design was adopted. A total of 97 samples were selected from different faculties of Jamia Hamdard by multistage sampling with a systematic random sampling technique. Among post-graduate students, the majority of sample subjects had average compliance with computer-related ergonomic principles. As regards computer-related health problems, the majority of post-graduate students had moderate computer-related health problems. The Self Instructional Module developed for prevention of computer-related health problems was found to be acceptable to the post-graduate students.
The effect of switch control site on computer skills of infants and toddlers.
Glickman, L; Deitz, J; Anson, D; Stewart, K
1996-01-01
The purpose of this study was to determine whether switch control site (hand vs. head) affects the age at which children can successfully activate a computer to play a cause-and-effect game. The sample consisted of 72 participants randomly divided into two groups (head switch and hand switch), with stratification for gender and age (9-11 months, 12-14 months, 15-17 months). All participants were typically developing. After a maximum of 5 min of training, each participant was given five opportunities to activate a Jelly Bean switch to play a computer game. Competency was defined as four to five successful switch activations. Most participants in the 9-month to 11-month age group could successfully use a hand switch to activate a computer, and for the 15-month to 17-month age group, 100% of the participants met with success. By contrast, in the head switch condition, approximately one third of the participants in each of the three age ranges were successful in activating the computer to play a cause-and-effect game. The findings from this study provide developmental guidelines for using switches (head vs. hand) to activate computers to play cause-and-effect games and suggest that the clinician may consider introducing basic computer and switch skills to children as young as 9 months of age. However, the clinician is cautioned that the head switch may be more difficult to master than the hand switch and that additional research involving children with motor impairments is needed.
Nonlinear feedback method of robot control - A preliminary experimental study
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Ganguly, S.; Li, Z.; Bejczy, A. K.
1990-01-01
The nonlinear feedback method of robot control has been experimentally implemented on two PUMA 560 robot arms. The feasibility of the proposed controller, which was shown viable through simulation results earlier, is stressed. The servomechanism operates in task space, and the nonlinear feedback takes care of the necessary transformations to compute the necessary joint currents. A discussion is presented of the implementation with details of the experiments performed. The performance of the controller is encouraging but was limited to 100-Hz sampling frequency and to derived velocity information at the time of the experimentation. The setup of the lab, the software aspects, results, and the control hardware architecture that has recently been implemented are discussed.
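The control law itself is not given in the abstract; as a generic illustration of nonlinear feedback of the computed-torque type, here is a one-degree-of-freedom sketch run at the 100 Hz rate mentioned above (the toy arm model and the gains are assumptions, not the PUMA 560 implementation):

```python
import numpy as np

# toy 1-DOF arm: m*l^2*qdd + m*g*l*sin(q) = tau
m, l, grav = 1.0, 0.5, 9.81
M = lambda q: m * l**2
G = lambda q: m * grav * l * np.sin(q)

def computed_torque(q, qd, q_des, qd_des, qdd_des, kp=100.0, kv=20.0):
    """Nonlinear (computed-torque) feedback: cancel the model nonlinearities
    and impose linear, critically damped error dynamics."""
    v = qdd_des + kv * (qd_des - qd) + kp * (q_des - q)
    return M(q) * v + G(q)

# track a step to q_des = 1 rad at a 100 Hz control/sampling rate
dt, q, qd = 0.01, 0.0, 0.0
for _ in range(200):
    tau = computed_torque(q, qd, q_des=1.0, qd_des=0.0, qdd_des=0.0)
    qdd = (tau - G(q)) / M(q)
    qd += qdd * dt
    q += qd * dt
print(round(q, 3))   # close to 1.0
```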
NASA Technical Reports Server (NTRS)
Klemin, Alexander
1937-01-01
An airplane in steady rectilinear flight was assumed to experience an initial disturbance in rolling or yawing velocity. The equations of motion were solved to see if it was possible to hasten recovery of a stable airplane or to secure recovery of an unstable airplane by the application of a single lateral control following an exponential law. The sample computations indicate that, for initial disturbances complex in character, it would be difficult to secure correlation with any type of exponential control. The possibility is visualized that the two-control operation may seriously impair the ability to hasten recovery or counteract instability.
Wilkins, Chris; Casswell, Sally; Barnes, Helen Moewaka; Pledger, Megan
2003-06-01
An intrinsic drawback with the use of a computer-assisted telephone interview (CATI) survey methodology is that people who live in households without a connected landline telephone are excluded from the survey sample. This paper presents a pilot of the feasibility of a computer-assisted cell-phone interview (CACI) methodology designed to survey people living in households without a telephone about alcohol use and be compatible with a larger telephone based alcohol sample. The CACI method was found to be an efficient and cost competitive method to reach non-telephone households. Telephone ownership was found to make a difference to the typical occasion amount of alcohol consumed, with respondents from households without telephones drinking significantly more than those with telephones even when consumption levels were controlled for socio-economic status. Although high levels of telephone ownership in the general population mean these differences may not have any impact on population alcohol measures they may be important in sub-populations where telephone ownership is lower.
Study of a scanning HIFU therapy protocol, Part II: Experiment and results
NASA Astrophysics Data System (ADS)
Andrew, Marilee A.; Kaczkowski, Peter; Cunitz, Bryan W.; Brayman, Andrew A.; Kargl, Steven G.
2003-04-01
Instrumentation and protocols for creating scanned HIFU lesions in freshly excised bovine liver were developed in order to study the in vitro HIFU dose response and validate models. Computer-control of the HIFU transducer and 3-axis positioning system provided precise spatial placement of the thermal lesions. Scan speeds were selected in the range of 1 to 8 mm/s, and the applied electrical power was varied from 20 to 60 W. These parameters were chosen to hold the thermal dose constant. A total of six valid scans of 15 mm length were created in each sample; a 3.5 MHz single-element, spherically focused transducer was used. Treated samples were frozen, then sliced in 1.27 mm increments. Digital photographs of slices were downloaded to computer for image processing and analysis. Lesion characteristics, including the depth within the tissue, axial length, and radial width, were computed. Results were compared with those generated from modified KZK and BHTE models, and include a comparison of the statistical variation in the across-scan lesion radial width. [Work supported by USAMRMC.]
An adaptive Cartesian control scheme for manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
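The abstract's adaptation laws are not reproduced here; as a minimal illustration of direct model-reference adaptation that needs no parameter estimation, the classic first-order MRAC sketch below adapts a feedforward and a feedback gain from the tracking error alone (the plant, reference model, and gains are placeholders, not the manipulator scheme of the paper):

```python
# first-order MRAC sketch: plant xdot = a*x + b*u with a, b unknown to the
# controller; reference model xmdot = -am*xm + bm*r; direct, Lyapunov-rule
# adaptation of the feedforward/feedback gains (assumes b > 0)
a, b = 1.0, 3.0                 # "true" plant, unknown to the controller
am, bm, gamma = 4.0, 4.0, 2.0   # reference model and adaptation gain
dt = 0.001
x = xm = 0.0
th1 = th2 = 0.0                 # adaptive feedforward and feedback gains

for k in range(20000):                        # 20 s of simulated time
    r = 1.0 if (k * dt) % 4 < 2 else -1.0     # square-wave reference
    u = th1 * r + th2 * x
    e = x - xm                                # tracking error
    th1 += dt * (-gamma * e * r)              # direct adaptation laws
    th2 += dt * (-gamma * e * x)
    x += dt * (a * x + b * u)
    xm += dt * (-am * xm + bm * r)

print(round(th1, 2), round(th2, 2))  # approach bm/b = 1.33 and -(a+am)/b = -1.67
```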
Testing of typical spacecraft materials in a simulated substorm environment
NASA Technical Reports Server (NTRS)
Stevens, N. J.; Berkopec, F. D.; Staskus, J. V.; Blech, R. A.; Narciso, S. J.
1977-01-01
The test specimens were spacecraft paints, silvered Teflon, thermal blankets, and solar array segments. The samples, ranging in size from 300 to 1000 sq cm, were exposed to monoenergetic electron energies from 2 to 20 keV at a current density of 1 nA/sq cm. The samples generally behaved as capacitors with strong voltage gradients at their edges. The charging characteristics of the silvered Teflon, Kapton, and solar cell covers were controlled by the secondary emission characteristics. Insulators that did not discharge were the spacecraft paints and the quartz fiber cloth thermal blanket sample. All other samples did experience discharges when the surface voltage reached -8 to -16 kV. The discharges were photographed. The breakdown voltage for each sample was determined and the average energy lost in the discharge was computed.
NASA Astrophysics Data System (ADS)
Sugiharti, Gulmah
2018-03-01
This study aims to examine the improvement of student learning outcomes through independent learning with computer-based learning media in the STBM (Teaching and Learning Strategy) Chemistry course. The population comprised all students of the class of 2014 taking the STBM Chemistry course, four classes in total. The sample was taken purposively: two classes of 32 students each, serving as the control class and the experimental class. The instrument used was a multiple-choice learning outcomes test of 20 items, previously shown to be valid and reliable. Data were analyzed using a one-sided t test, and improvement in learning outcomes was assessed with a normalized gain test. Based on the learning outcome data, the average normalized gain was 0.530 for the experimental class and 0.224 for the control class, corresponding to learning-outcome improvements of 53% and 22.4%, respectively. Hypothesis testing gave t_count = 9.02 > t_table = 1.6723 at a significance level of α = 0.05 with df = 58. This means Ha is accepted: the use of computer-based learning media (CAI) can improve student learning outcomes in the Teaching and Learning Strategy (STBM) Chemistry course in the 2017/2018 academic year.
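The normalized gain referred to is presumably Hake's gain (an assumption; the abstract does not restate the formula):

```latex
% Hake's normalized gain, the quantity usually meant by a "normalized gain test"
\langle g \rangle \;=\; \frac{\%\text{posttest} - \%\text{pretest}}{100\% - \%\text{pretest}}
```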
Differences between blood donors and a population sample: implications for case-control studies.
Golding, Jean; Northstone, Kate; Miller, Laura L; Davey Smith, George; Pembrey, Marcus
2013-08-01
Selecting appropriate controls for studies of genetic variation in case series is important. The two major candidates involve the use of blood donors or a random sample of the population. We compare and contrast the two different populations of controls for studies of genetic variation using data from parents enrolled in the Avon Longitudinal Study of Parents and Children (ALSPAC). In addition we compute different biases using a series of hypothetical assumptions. The study subjects who had been blood donors differed markedly from the general population in social, health-related, anthropometric, and personality-related variables. Using theoretical examples, we show that blood donors are a poor control group for non-genetic studies of diseases related to environmentally, behaviourally, or socially patterned exposures. However, we show that if blood donors are used as controls in genetic studies, these factors are unlikely to make a major difference in detecting true associations with relatively rare disorders (cumulative incidence through life of <10%). Nevertheless, for more common disorders, the reduction in accuracy resulting from the inclusion in any control population of individuals who have or will develop the disease in question can create a greater bias than can socially patterned factors. Information about the medical history of a control and the parents of the control (as a proxy for whether the control will develop the disease) is more important with regard to the choice of controls than whether the controls are a random population sample or blood donors.
Adaptive hybrid control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
Simple methods for the design of adaptive force and position controllers for robot manipulators within the hybrid control architecture are presented. The force controller is composed of an adaptive PID feedback controller, an auxiliary signal and a force feedforward term, and it achieves tracking of desired force setpoints in the constraint directions. The position controller consists of adaptive feedback and feedforward controllers and an auxiliary signal, and it accomplishes tracking of desired position trajectories in the free directions. The controllers are capable of compensating for dynamic cross-couplings that exist between the position and force control loops in the hybrid control architecture. The adaptive controllers do not require knowledge of the complex dynamic model or parameter values of the manipulator or the environment. The proposed control schemes are computationally fast and suitable for implementation in on-line control with high sampling rates.
Salt, Julián; Cuenca, Ángel; Palau, Francisco; Dormido, Sebastián
2014-01-01
In many control applications, the sensor technology used for the measurement of the variable to be controlled is not able to maintain a restricted sampling period. In this context, the assumption of a regular and uniform sampling pattern is questionable. Moreover, if the control action updating can be faster than the output measurement frequency in order to fulfill the proposed closed loop behavior, the solution is usually a multirate controller. There are some known aspects to be careful of when a multirate (MR) system is going to be designed. The proper multiplicity between input-output sampling periods, the proper controller structure, the existence of ripples and other issues need to be considered. A useful way to save time and achieve good results is to have an assisted computer design tool. An interactive simulation tool to deal with MR seems to be the right solution. In this paper this kind of simulation application is presented. It allows an easy understanding of the performance degradation or improvement when changing the multirate sampling pattern parameters. The tool was developed using Sysquake, a Matlab-like language with fast execution and powerful graphic facilities. It can be delivered as an executable. In the paper a detailed explanation of MR treatment is also included and the design of four different MR controllers with flexible structure to be adapted to different schemes will also be presented. The Smith predictor in these MR schemes is also explained, justified, and used when time delays appear. Finally some interesting observations achieved using this interactive tool are included. PMID:24583971
Stormwater-runoff data, Madison, Wisconsin, 1993-94
Waschbusch, R.J.
1996-01-01
As required by Section 402(P) of the Water Quality Control Act of 1987, stormwater-runoff samples collected during storms that met three criteria (rainfall depths 50 to 150 percent of average depth range, rainfall durations 50 to 150 percent of average duration, and antecedent dry-weather period of at least 72 hours) were analyzed for semivolatile organic chemicals, total metals, pesticides, polychlorinated biphenyls, inorganic constituents, bacteria, oil and grease, pH, and water temperature. Two of the seven sites also had samples analyzed for volatile organic chemicals. In addition to the required sampling, additional runoff samples that did not necessarily meet the three rainfall criteria, were analyzed for total metals and inorganic constituents. Storm loads of selected constituents were computed.
Chaos: Understanding and Controlling Laser Instability
NASA Technical Reports Server (NTRS)
Blass, William E.
1997-01-01
In order to characterize the behavior of tunable diode lasers (TDL), the first step in the project involved the redesign of the TDL system here at the University of Tennessee Molecular Systems Laboratory (UTMSL). Having made these changes it was next necessary to optimize the new optical system. This involved the fine adjustments to the optical components, particularly in the monochromator, to minimize the aberrations of coma and astigmatism and to assure that the energy from the beam is focused properly on the detector element. The next step involved the taking of preliminary data. We were then ready for the analysis of the preliminary data. This required the development of computer programs that use mathematical techniques to look for signatures of chaos. Commercial programs were also employed. We discovered some indication of high dimensional chaos, but were hampered by the low sample rate of 200 KSPS (kilosamples/sec) and even more by our sample size of 1024 (1K) data points. These limitations were expected and we added a high speed data acquisition board. We incorporated into the system a computer with a 40 MSPS (million samples/sec) data acquisition board. This board can also capture 64K of data points so that were then able to perform the more accurate tests for chaos. The results were dramatic and compelling, we had demonstrated that the lead salt diode laser had a chaotic frequency output. Having identified the chaotic character in our TDL data, we proceeded to stage two as outlined in our original proposal. This required the use of an Occasional Proportional Feedback (OPF) controller to facilitate the control and stabilization of the TDL system output. The controller was designed and fabricated at GSFC and debugged in our laboratories. After some trial and error efforts, we achieved chaos control of the frequency emissions of the laser. The two publications appended to this introduction detail the entire project and its results.
NASA Astrophysics Data System (ADS)
Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.
2013-08-01
Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations and capture applications that require high level of autonomy, optimal path planning and dynamic safety margins. Such control techniques require high-performance computational needs for solving large optimization problems. The development and implementation in a flight representative avionic architecture of a MPC based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.
Franzmeier, N; Caballero, M Á Araque; Taylor, A N W; Simon-Vermot, L; Buerger, K; Ertl-Wagner, B; Mueller, C; Catak, C; Janowitz, D; Baykara, E; Gesierich, B; Duering, M; Ewers, M
2017-04-01
Cognitive reserve (CR) shows protective effects in Alzheimer's disease (AD) and reduces the risk of dementia. Despite the clinical significance of CR, a clinically useful diagnostic biomarker of brain changes underlying CR in AD is not available yet. Our aim was to develop a fully-automated approach applied to fMRI to produce a biomarker associated with CR in subjects at increased risk of AD. We computed resting-state global functional connectivity (GFC), i.e. the average connectivity strength, for each voxel within the cognitive control network, which may sustain CR due to its central role in higher cognitive function. In a training sample including 43 mild cognitive impairment (MCI) subjects and 24 healthy controls (HC), we found that MCI subjects with high CR (> median of years of education, CR+) showed increased frequency of high GFC values compared to MCI-CR- and HC. A summary index capturing such a surplus frequency of high GFC was computed (called GFC reserve (GFC-R) index). GFC-R discriminated MCI-CR+ vs. MCI-CR-, with the area under the ROC = 0.84. Cross-validation in an independently recruited test sample of 23 MCI subjects showed that higher levels of the GFC-R index predicted higher years of education and an alternative questionnaire-based proxy of CR, controlled for memory performance, gray matter of the cognitive control network, white matter hyperintensities, age, and gender. In conclusion, the GFC-R index that captures GFC changes within the cognitive control network provides a biomarker candidate of functional brain changes of CR in patients at increased risk of AD.
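As a rough sketch of the GFC computation described above (mean connectivity of each voxel with all other voxels in the network mask; preprocessing, any Fisher transformation, and the exact mask are details of the paper's pipeline not reproduced here), one might compute:

```python
import numpy as np

def global_functional_connectivity(ts):
    """Average connectivity strength per voxel: mean Pearson correlation of
    each voxel's resting-state time series with every other voxel's.
    ts: (n_timepoints, n_voxels) array restricted to the network mask."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    n_t = ts.shape[0]
    corr = (z.T @ z) / n_t                   # voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, np.nan)
    return np.nanmean(corr, axis=1)          # one GFC value per voxel

# toy data: 200 volumes, 500 "voxels" inside an assumed network mask
gfc = global_functional_connectivity(np.random.default_rng(0).normal(size=(200, 500)))
print(gfc.shape, round(float(gfc.mean()), 3))
```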
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.
Gupte, M D; Narasimhamurthy, B
1999-06-01
In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
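A minimal version of the LQAS design calculation described above, using the Poisson approximation to find the smallest sample size and critical count meeting 5% Type I error and 90% power (the acceptable/unacceptable prevalence pair in the example is illustrative, not one of the paper's four scenarios):

```python
from math import exp, factorial

def poisson_cdf(k, mu):
    """P(X <= k) for a Poisson(mu) count."""
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

def lqas_design(p_acceptable, p_unacceptable, alpha=0.05, power=0.90,
                n_max=200000, step=1000):
    """Smallest sample size n and critical count c such that an area with
    prevalence >= p_unacceptable is flagged (>= c cases observed) with
    probability >= power, while an area at p_acceptable is falsely flagged
    with probability <= alpha, under the Poisson approximation."""
    for n in range(step, n_max + 1, step):
        for c in range(1, 50):
            false_alarm = 1 - poisson_cdf(c - 1, n * p_acceptable)
            detect = 1 - poisson_cdf(c - 1, n * p_unacceptable)
            if false_alarm <= alpha and detect >= power:
                return n, c
    return None

# e.g. flag districts at 2 per 10,000 while accepting those at 0.5 per 10,000
print(lqas_design(p_acceptable=0.5e-4, p_unacceptable=2e-4))
```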
Health status and health dynamics in an empirical model of expected longevity.
Benítez-Silva, Hugo; Ni, Huan
2008-05-01
Expected longevity is an important factor influencing older individuals' decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman [Grossman, M., 1972. On the concept of health capital and demand for health. Journal of Political Economy 80, 223-255] has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics.
Development of advanced control schemes for telerobot manipulators
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1991-01-01
To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and quaternions representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion is also verified using computer simulation. The motion control of the slave arm is examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented and some preliminary results of the remaining control schemes are presented and discussed.
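The Jacobian pseudo-inverse evaluated in the simulations maps desired end-effector velocities to joint rates; a small sketch follows (the 6x7 Jacobian here is a random placeholder, and the damped variant is a common safeguard near singularities, not necessarily the one used in the study):

```python
import numpy as np

def joint_rates_from_cartesian(J, xdot, damping=0.0):
    """Map a desired Cartesian velocity to joint rates for a redundant arm
    using the Jacobian pseudo-inverse, qdot = pinv(J) @ xdot, or a damped
    variant when requested."""
    if damping == 0.0:
        return np.linalg.pinv(J) @ xdot
    JT = J.T
    return JT @ np.linalg.solve(J @ JT + damping**2 * np.eye(J.shape[0]), xdot)

# 6x7 Jacobian of a 7-DOF arm (random placeholder) and a desired twist [v; omega]
rng = np.random.default_rng(0)
J = rng.normal(size=(6, 7))
xdot = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.02])
qdot = joint_rates_from_cartesian(J, xdot)
print(np.allclose(J @ qdot, xdot))            # True for a full-rank J
```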
Performance evaluation of a six-axis generalized force-reflecting teleoperator
NASA Technical Reports Server (NTRS)
Hannaford, B.; Wood, L.; Guggisberg, B.; Mcaffee, D.; Zak, H.
1989-01-01
Work in real-time distributed computation and control has culminated in a prototype force-reflecting telemanipulation system having a dissimilar master (cable-driven, force-reflecting hand controller) and a slave (PUMA 560 robot with custom controller), an extremely high sampling rate (1000 Hz), and a low loop computation delay (5 msec). In a series of experiments with this system and five trained test operators covering over 100 hours of teleoperation, performance was measured in a series of generic and application-driven tasks with and without force feedback, and with control shared between teleoperation and local sensor referenced control. Measurements defining task performance included 100-Hz recording of six-axis force/torque information from the slave manipulator wrist, task completion time, and visual observation of predefined task errors. The task consisted of high precision peg-in-hole insertion, electrical connectors, velcro attach-de-attach, and a twist-lock multi-pin connector. Each task was repeated three times under several operating conditions: normal bilateral telemanipulation, forward position control without force feedback, and shared control. In shared control, orientation was locally servo controlled to comply with applied torques, while translation was under operator control. All performance measures improved as capability was added along a spectrum of capabilities ranging from pure position control through force-reflecting teleoperation and shared control. Performance was optimal for the bare-handed operator.
Aspects of CO2 laser engraving of printing cylinders.
Atanasov, P A; Maeno, K; Manolov, V P
1999-03-20
Results of the experimental and theoretical investigations of CO(2) laser-engraved cylinders are presented. The processed surfaces of test samples are examined by a phase-stepping laser interferometer, digital microscope, and computer-controlled profilometer. Fourier analysis is made on the patterns parallel to the axis of the laser-scribed test ceramic cylinders. The problem of the visually observed banding is discussed.
X-ray computed tomography imaging: A not-so-nondestructive technique
NASA Astrophysics Data System (ADS)
Sears, Derek W. G.; Sears, Hazel; Ebel, Denton S.; Wallace, Sean; Friedrich, Jon M.
2016-04-01
X-ray computed tomography has become a popular means for examining the interiors of meteorites and has been advocated for routine curation and for the examination of samples returned by missions. Here, we report the results of a blind test that indicate that CT imaging deposits a considerable radiation dose in a meteorite and seriously compromises its natural radiation record. Ten vials of the Bruderheim L6 chondrite were placed in a CT imager and exposed to radiation levels typical for meteorite studies. Half were retained as controls. Their thermoluminescence (TL) properties were then measured in a blind test. Five of the samples had TL data unaltered from their original (~10 cps) while five had very strong signals (~20,000 cps). It was therefore very clear which samples had been in the CT scanner. For comparison, the natural TL signal from Antarctic meteorites is ~5000-50,000 cps. Using the methods developed for Antarctic meteorites, the apparent dose absorbed by the five test samples was calculated to be 83 ± 5 krad, comparable with the highest doses observed in Antarctic meteorites and freshly fallen meteorites. While these results do not preclude the use of CT scanners when scientifically justified, it should be remembered that the record of radiation exposure to ionizing radiations for the sample will be destroyed and that TL, or the related optically stimulated luminescence, are the primary modern techniques for radiation dosimetry. This is particularly important with irreplaceable samples, such as meteorite main masses, returned samples, and samples destined for archive.
Iannaccone, Reto; Brem, Silvia; Walitza, Susanne
2017-01-01
Patients with obsessive-compulsive disorder (OCD) can be described as cautious and hesitant, manifesting an excessive indecisiveness that hinders efficient decision making. However, excess caution in decision making may also lead to better performance in specific situations where the cost of extended deliberation is small. We compared 16 juvenile OCD patients with 16 matched healthy controls whilst they performed a sequential information gathering task under different external cost conditions. We found that patients with OCD outperformed healthy controls, winning significantly more points. The groups also differed in the number of draws required prior to committing to a decision, but not in decision accuracy. A novel Bayesian computational model revealed that subjective sampling costs arose as a non-linear function of sampling, closely resembling an escalating urgency signal. Group difference in performance was best explained by a later emergence of these subjective costs in the OCD group, also evident in an increased decision threshold. Our findings present a novel computational model and suggest that enhanced information gathering in OCD can be accounted for by a higher decision threshold arising out of an altered perception of costs that, in some specific contexts, may be advantageous. PMID:28403139
Recent CFD Simulations of Shuttle Orbiter Contingency Abort Aerodynamics
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Prabhu, Dinesh; Wright, Michael; Davies, Carol; McDaniel, Ryan; Venkatapathy, Ethiraj; Wersinski, Paul; Gomez, Reynaldo; Arnold, Jim (Technical Monitor)
2001-01-01
Modern Computational Fluid Dynamics (CFD) techniques were used to compute aerodynamic forces and moments of the Space Shuttle Orbiter in specific portions of contingency abort trajectory space. The trajectory space covers a Mach number range of 3.5-15, an angle-of-attack range of 20-60 degrees, an altitude range of 100-190 kft, and several different settings of the control surfaces (elevons, body flap, and speed brake). While approximately 40 cases have been computed, only a sampling of the results is presented here. The computed results, in general, are in good agreement with the Orbiter Operational Aerodynamic Data Book (OADB) data (i.e., within the uncertainty bands) for almost all the cases. However, in a limited number of high angle-of-attack cases (at Mach 15), there are significant differences between the computed results, especially the vehicle pitching moment, and the OADB data. A preliminary analysis of the data from the CFD simulations at Mach 15 shows that these differences can be attributed to real-gas/Mach number effects.
Promoting Physical Activity through Hand-Held Computer Technology
King, Abby C.; Ahn, David K.; Oliveira, Brian M.; Atienza, Audie A.; Castro, Cynthia M.; Gardner, Christopher D.
2009-01-01
Background Efforts to achieve population-wide increases in walking and similar moderate-intensity physical activities potentially can be enhanced through relevant applications of state-of-the-art interactive communication technologies. Yet few systematic efforts to evaluate the efficacy of hand-held computers and similar devices for enhancing physical activity levels have occurred. The purpose of this first-generation study was to evaluate the efficacy of a hand-held computer (i.e., personal digital assistant [PDA]) for increasing moderate intensity or more vigorous (MOD+) physical activity levels over 8 weeks in mid-life and older adults relative to a standard information control arm. Design Randomized, controlled 8-week experiment. Data were collected in 2005 and analyzed in 2006-2007. Setting/Participants Community-based study of 37 healthy, initially underactive adults aged 50 years and older who were randomized and completed the 8-week study (intervention=19, control=18). Intervention Participants received an instructional session and a PDA programmed to monitor their physical activity levels twice per day and provide daily and weekly individualized feedback, goal setting, and support. Controls received standard, age-appropriate written physical activity educational materials. Main Outcome Measure Physical activity was assessed via the Community Healthy Activities Model Program for Seniors (CHAMPS) questionnaire at baseline and 8 weeks. Results Relative to controls, intervention participants reported significantly greater 8-week mean estimated caloric expenditure levels and minutes per week in MOD+ activity (p<0.04). Satisfaction with the PDA was reasonably high in this largely PDA-naive sample. Conclusions Results from this first-generation study indicate that hand-held computers may be effective tools for increasing initial physical activity levels among underactive adults. PMID:18201644
NASA Astrophysics Data System (ADS)
Wietsma, T.; Minsker, B. S.
2012-12-01
Increased sensor throughput combined with decreasing hardware costs has led to a disruptive growth in data volume. This disruption, popularly termed "the data deluge," has placed new demands for cyberinfrastructure and information technology skills among researchers in many academic fields, including the environmental sciences. Adaptive sampling has been well established as an effective means of improving network resource efficiency (energy, bandwidth) without sacrificing sample set quality relative to traditional uniform sampling. However, using adaptive sampling for the explicit purpose of improving resolution over events -- situations displaying intermittent dynamics and unique hydrogeological signatures -- is relatively new. In this paper, we define hot spots and hot moments in terms of sensor signal activity as measured through discrete Fourier analysis. Following this frequency-based approach, we apply the Nyquist-Shannon sampling theorem, a fundamental contribution from signal processing that led to the field of information theory, for analysis of uni- and multivariate environmental signal data. In the scope of multi-scale environmental sensor networks, we present several sampling control algorithms, derived from the Nyquist-Shannon theorem, that operate at local (field sensor), regional (base station for aggregation of field sensor data), and global (Cloud-based, computationally intensive models) scales. Evaluated over soil moisture data, results indicate significantly greater sample density during precipitation events while reducing overall sample volume. Using these algorithms as indicators rather than control mechanisms, we also discuss opportunities for spatio-temporal modeling as a tool for planning/modifying sensor network deployments. The approach comprises a locally adaptive model based on the Nyquist-Shannon sampling theorem, together with Pareto frontiers for the local, regional, and global models relative to uniform sampling; the objectives are (1) overall sampling efficiency and (2) sampling efficiency during hot moments as identified using a heuristic approach.
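As an illustration of the frequency-based idea in this abstract, the sketch below (a simplified assumption of ours, not the authors' algorithm) estimates the highest frequency carrying most of the power in a recent sensor window via the discrete Fourier transform and then sets the next sampling interval from the Nyquist-Shannon criterion fs ≥ 2·fmax, clipped between configurable bounds.

```python
import numpy as np

def next_sampling_interval(window, dt, power_fraction=0.99,
                           min_dt=60.0, max_dt=3600.0):
    """Choose the next sampling interval from a recent window of readings."""
    x = np.asarray(window, dtype=float)
    x = x - x.mean()                            # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2      # power spectrum via the DFT
    total = spectrum.sum()
    if total == 0.0:                            # flat signal: back off to the slowest rate
        return max_dt
    freqs = np.fft.rfftfreq(len(x), d=dt)
    cumulative = np.cumsum(spectrum) / total
    f_max = freqs[np.searchsorted(cumulative, power_fraction)]
    if f_max <= 0.0:
        return max_dt
    return float(np.clip(1.0 / (2.0 * f_max), min_dt, max_dt))  # fs >= 2 * f_max

# Hypothetical soil-moisture trace sampled every 300 s:
trace = np.sin(np.linspace(0.0, 20.0, 256)) + 0.05 * np.random.randn(256)
print(next_sampling_interval(trace, dt=300.0))
```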
Sagawa, Motoyasu; Nakayama, Tomio; Tanaka, Makoto; Sakuma, Tsutomu; Sobue, Tomotaka
2012-12-01
In order to assess the efficacy of lung cancer screening using low-dose thoracic computed tomography, compared with chest roentgenography, in people aged 50-64 years with a smoking history of <30 pack-years, a randomized controlled trial is being conducted in Japan. The screening methods are randomly assigned individually. The duration of this trial is 10 years. In the intervention arm, low-dose thoracic computed tomography is performed for each participant in the first and the sixth years. In the control arm, chest roentgenography is performed for each participant in the first year. The participants in both arms are also encouraged to receive routine lung cancer screening using chest roentgenography annually. The interpretation of radiological findings and the follow-up of undiagnosed nodules are to be carried out according to the guidelines published in Japan. The required sample size is calculated to be 17 500 subjects for each arm.
Botrel, L; Acqualagna, L; Blankertz, B; Kübler, A
2017-11-01
Brain computer interfaces (BCIs) allow for controlling devices through modulation of sensorimotor rhythms (SMR), yet a considerable proportion of users is unable to achieve sufficient accuracy. Here, we investigated if visuo-motor coordination (VMC) training or Jacobson's progressive muscle relaxation (PMR) prior to BCI use would increase later performance compared to a control group who performed a reading task (CG). Running the study in two different BCI labs, we achieved a joint sample size of N=154 naïve participants. No significant effect of either intervention (VMC, PMR, control) was found on resulting BCI performance. Relaxation level and visuo-motor performance were associated with later BCI performance in one BCI lab but not in the other. These mixed results do not indicate a strong potential of VMC or PMR for boosting performance, yet further research with different training parameters or experimental designs is needed to complete the picture. Copyright © 2017 Elsevier B.V. All rights reserved.
Hazard Control Extensions in a COTS Based Data Handling System
NASA Astrophysics Data System (ADS)
Vogel, Torsten; Rakers, Sven; Gronowski, Matthias; Schneegans, Joachim
2011-08-01
EML is an electromagnetic levitator for containerless processing of conductive samples on the International Space Station. This materials science experiment is running in the European Drawer Rack (EDR) facility. The objective of this experiment is to gain insight into the parameters of liquid metal samples and their crystallisation processes without the influence of container walls. To this end the samples are electromagnetically positioned in a coil system and then heated up beyond their melting point in an ultraclean environment. The EML programme is currently under development by Astrium Space Transportation in Friedrichshafen and Bremen, jointly funded by ESA and DLR (on behalf of BMWi, contract 50WP0808). EML consists of four main modules listed in Table 1. The paper focuses mainly on the architecture and design of the ECM module and its contribution to the safe operation of the experiment. The ECM is a computer system that integrates the power supply to the EML experiment, control functions, and video handling and compression features. Experiment control is performed by either telecommand or the execution of predefined experiment scripts.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE), and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, and the imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces the computational complexity, and effectively preserves the image edges.
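The interpolation strategy described above can be sketched as follows. This is an illustrative, software-only approximation (the paper's implementation runs on the DSP), assuming an RGGB Bayer layout and a simple gradient threshold: edge pixels are interpolated along the direction of the smaller gradient, non-edge pixels bilinearly.

```python
import numpy as np

def interpolate_green(bayer, edge_threshold=20.0):
    """Fill in the green channel of an RGGB Bayer mosaic.

    At red/blue locations, pick horizontal or vertical interpolation when the
    local gradients indicate an edge; otherwise use the bilinear average of
    the four green neighbours.
    """
    h, w = bayer.shape
    green = bayer.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 1:        # green is already sampled here in RGGB
                continue
            dh = abs(green[y, x - 1] - green[y, x + 1])   # horizontal gradient
            dv = abs(green[y - 1, x] - green[y + 1, x])   # vertical gradient
            if max(dh, dv) > edge_threshold and dh < dv:
                # horizontal neighbours agree: interpolate along the row
                green[y, x] = 0.5 * (green[y, x - 1] + green[y, x + 1])
            elif max(dh, dv) > edge_threshold and dv < dh:
                # vertical neighbours agree: interpolate along the column
                green[y, x] = 0.5 * (green[y - 1, x] + green[y + 1, x])
            else:                                          # non-edge pixel: bilinear
                green[y, x] = 0.25 * (green[y, x - 1] + green[y, x + 1] +
                                      green[y - 1, x] + green[y + 1, x])
    return green
```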
Neural computing thermal comfort index PMV for the indoor environment intelligent control system
NASA Astrophysics Data System (ADS)
Liu, Chang; Chen, Yifei
2013-03-01
Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. This paper presents an intelligent comfort control system that combines intelligent control and minimum-power control strategies for the indoor environment. In the system, the predicted mean vote (PMV) is used as the control goal, and corrected PMV formulas are optimized to improve the indoor comfort level by considering six comfort-related variables. To compute PMV with better performance and to overcome the nonlinear character of the PMV calculation, an RBF neural network tuned by a genetic algorithm is designed. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on these input samples and expected outputs. The simulation results show that the design of the intelligent calculation method is valid; the method achieves high precision, fast dynamic response, and good overall system performance, and can be used in practice within the required calculation error.
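A minimal sketch of the Gaussian RBF approximator referred to above. The centres, width, and training data here are placeholders: in the paper the network parameters are tuned by a genetic algorithm and the targets come from the corrected PMV formulas, neither of which is reproduced here.

```python
import numpy as np

class RBFNetwork:
    """Minimal Gaussian RBF regressor, sketching the PMV approximator.

    Inputs are the six comfort variables (air temperature, mean radiant
    temperature, air velocity, humidity, metabolic rate, clothing); the
    target would be the PMV value from the comfort equations (not
    reproduced here).
    """
    def __init__(self, centers, width):
        self.centers = np.asarray(centers, dtype=float)
        self.width = float(width)
        self.weights = None

    def _phi(self, X):
        # Gaussian basis functions evaluated at every (sample, centre) pair
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        # Linear output layer solved by least squares (no GA tuning here).
        self.weights, *_ = np.linalg.lstsq(self._phi(X), np.asarray(y, float), rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.weights

# Hypothetical training points: random comfort variables with placeholder targets.
X = np.random.rand(200, 6)
y = X.sum(axis=1)                     # stand-in for PMV values
model = RBFNetwork(centers=X[:20], width=0.5).fit(X, y)
print(model.predict(X[:3]))
```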
Computer generated hologram from point cloud using graphics processor.
Chen, Rick H-Y; Wilkinson, Timothy D
2009-12-20
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a users manual, FORTRAN listings, and a sample case.
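The A, B, C, D matrices mentioned above are the Jacobians of the nonlinear state and output equations about a steady-state operating point. A generic central-difference sketch of that linearization step is shown below; it is not DYGABCD itself (which has its own steady-state balancing and perturbation logic), only an illustration of the idea.

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Numerically linearize dx/dt = f(x, u), y = g(x, u) about (x0, u0).

    Returns the A, B, C, D matrices of
        dx/dt = A dx + B du,   dy = C dx + D du,
    computed by central differences.
    """
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)

    def jac(func, z0, other, wrt_state):
        cols = []
        for i in range(len(z0)):
            dz = np.zeros_like(z0)
            dz[i] = eps
            if wrt_state:
                hi, lo = func(z0 + dz, other), func(z0 - dz, other)
            else:
                hi, lo = func(other, z0 + dz), func(other, z0 - dz)
            cols.append((np.atleast_1d(hi) - np.atleast_1d(lo)) / (2 * eps))
        return np.column_stack(cols)

    A = jac(f, x0, u0, wrt_state=True)
    B = jac(f, u0, x0, wrt_state=False)
    C = jac(g, x0, u0, wrt_state=True)
    D = jac(g, u0, x0, wrt_state=False)
    return A, B, C, D

# Example with a toy one-state, one-input model: dx/dt = -x + u**2, y = 2x
A, B, C, D = linearize(lambda x, u: -x + u**2,
                       lambda x, u: 2.0 * x,
                       x0=[1.0], u0=[1.0])
```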
Intentions of hospital nurses to work with computers: based on the theory of planned behavior.
Shoham, Snunith; Gonen, Ayala
2008-01-01
The purpose of this study was to determine registered nurses' attitudes related to intent to use computers in the hospital setting as a predictor of their future behavior. The study was further aimed at identifying the relationship between these attitudes and selected sociological, professional, and personal factors and to describe a research model integrating these various factors. The study was based on the theory of planned behavior. A random sample of 411 registered nurses was selected from a single large medical center in Israel. The study tool was a Likert-style questionnaire. Nine different indices were used: (1) behavioral intention toward computer use; (2) general attitudes toward computer use; (3) nursing attitudes toward computer use; (4) threat involved in computer use; (5) challenge involved in computer use; (6) organizational climate; (7) departmental climate; (8) attraction to technological innovations/innovativeness; (9) self-efficacy, ability to control behavior. Strong significant positive correlations were found between the nurses' attitudes (general attitudes and nursing attitudes), self-efficacy, innovativeness, and intentions to use computers. Higher correlations were found between departmental climate and attitudes than between organizational climate and attitudes. The threat and challenge that are involved in computer use were shown as important mediating variables to the understanding of the process of predicting attitudes and intentions toward using computers.
Kämpf, Uwe; Shamshinova, Angelika; Kaschtschenko, Tamara; Mascolus, Wilfried; Pillunat, Lutz; Haase, Wolfgang
2008-01-01
The paper presents selected results of a prospective multicenter study. The reported study was aimed at the evaluation of a software-based stimulation method of computer training applied in addition to occlusion as a complementary treatment for therapy-resistant cases of amblyopia. The stimulus was a drifting sinusoidal grating of a spatial frequency of 0.3 cyc/deg and a temporal frequency of 1 cyc/sec, reciprocally coordinated with each other to a drift of 0.33 deg/sec. This pattern was implemented as a background stimulus in simple computer games to bind attention through sensory-motor coordination tasks. According to an earlier proposed hypothesis, the stimulation aims to provoke stimulus-induced phase-coupling in order to contribute to the refreshment of synchronization and coordination processes in the visual transmission channels. To assess the outcome of the therapy, we studied the development of visual acuity over a period of 6 months. The cooperating partners in this prospective multicenter study were strabology departments of ophthalmic clinics as well as private practices. For therapy control, a subsample of 55 patients was selected from an overall sample of 198 patients according to the criterion of strong therapy resistance. Visual acuity increased by about two logarithmic steps under occlusion combined with computer training, in addition to the gain of the same size obtained earlier by occlusion alone. Recalculated relative to the duration of the therapy periods, computer training combined with occlusion was found to be about twice as effective as the preceding occlusion alone. Thus, in the selected sample of 55 therapy-resistant patients, the combined computer training and occlusion produced an additional increase of the same size as the preceding occlusion alone, which by its end had yielded no further improvement in visual acuity. In a concluding theoretical note, a preliminary hypothesis about the neuronal mechanisms of the stimulus-induced treatment effect is discussed.
The VLBA correlator: Real-time in the distributed era
NASA Technical Reports Server (NTRS)
Wells, D. C.
1992-01-01
The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware controls the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers with 8-, 16- and 32-bit CPUs. Code is downloaded into the front-end CPUs depending on operating mode. Low-level code is assembly language; high-level code is C running under an RT OS. We use VxWorks on Motorola MVME147 CPUs. Code development is on a complex of SPARC workstations connected to the RT CPUs by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the same route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.
Material Outgassing, Identification and Deposition, MOLIDEP System
NASA Technical Reports Server (NTRS)
Scialdone, John J.; Montoya, Alex F.
2002-01-01
The outgassing tests are performed employing a modified vacuum-operated Cahn analytical microbalance, identified as the MOLIDEP system. The test measures, under high vacuum, the time-varying Molecular mass loss of a material sample held at a chosen temperature; it Identifies the outgassing molecular components using an inline SRS 300 amu Residual Gas Analyzer (RGA) and employs a temperature-controlled 10 MHz Quartz Crystal Microbalance (QCM) to measure the condensable DEPosits. Both the QCM and the RGA intercept, within the conductive passage, the outgassing products being evacuated by a turbomolecular pump. The continuous measurements of the mass loss, the rate of loss, the sample temperature, the rate of temperature change, the QCM temperature and the QCM-recorded condensable deposits or rate of deposits are saved to an Excel spreadsheet. A separate computer controls the RGA.
Makrlíková, Anna; Opekar, František; Tůma, Petr
2015-08-01
A computer-controlled hydrodynamic sample introduction method has been proposed for short-capillary electrophoresis. In the method, the background electrolyte (BGE) flushes the sample from the loop of a six-way sampling valve and carries it to the injection end of the capillary. A short pressure impulse is generated in the electrolyte stream at the time when the sample zone is at the capillary, leading to injection of the sample into the capillary. Then the electrolyte flow is stopped and the separation voltage is turned on. This way of sample introduction does not involve movement of the capillary, and both of its ends remain constantly in solution during both sample injection and separation. The amount of sample introduced into the capillary is controlled by the duration of the pressure pulse. The new sample introduction method was tested in the determination of ammonia, creatinine, uric acid, and hippuric acid in human urine. The determination was performed in a capillary with an overall length of 10.5 cm, in two BGEs with compositions 50 mM MES + 5 mM NaOH (pH 5.1) and 1 M acetic acid + 1.5 mM crown ether 18-crown-6 (pH 2.4). A dual contactless conductivity/UV spectrometric detector was used for the detection. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
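As a pointer to the averaging step named in the abstract, the sketch below gives a single-step linear response approximation estimate of a free-energy difference, alongside the Zwanzig free-energy perturbation formula for comparison; the multistep LRA and paradynamics bookkeeping of the actual protocol are not reproduced, and the inputs are assumed to be precomputed energy gaps.

```python
import numpy as np

def lra_free_energy(dU_on_A, dU_on_B):
    """Linear response approximation estimate of dG(A -> B).

    dU_on_A: samples of U_B - U_A evaluated on configurations sampled on
    potential A; dU_on_B: the same energy gap evaluated on configurations
    sampled on potential B. Single-step form:
        dG ≈ 0.5 * ( <U_B - U_A>_A + <U_B - U_A>_B )
    """
    return 0.5 * (np.mean(dU_on_A) + np.mean(dU_on_B))

def fep_free_energy(dU_on_A, beta):
    """Zwanzig free-energy perturbation estimate, for comparison:
        dG = -(1/beta) * ln <exp(-beta * (U_B - U_A))>_A
    """
    return -np.log(np.mean(np.exp(-beta * np.asarray(dU_on_A)))) / beta
```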
Shultzaberger, Ryan K.; Paddock, Mark L.; Katsuki, Takeo; Greenspan, Ralph J.; Golden, Susan S.
2016-01-01
The temporal measurement of a bioluminescent reporter has proven to be one of the most powerful tools for characterizing circadian rhythms in the cyanobacterium Synechococcus elongatus. Primarily, two approaches have been used to automate this process: (1) detection of cell culture bioluminescence in 96-well plates by a photomultiplier tube-based plate-cycling luminometer (TopCount Microplate Scintillation and Luminescence Counter, Perkin Elmer) and (2) detection of individual colony bioluminescence by iteratively rotating a Petri dish under a cooled CCD camera using a computer-controlled turntable. Each approach has distinct advantages. The TopCount provides a more quantitative measurement of bioluminescence, enabling the direct comparison of clock output levels among strains. The computer-controlled turntable approach has a shorter set-up time and greater throughput, making it a more powerful phenotypic screening tool. While the latter approach is extremely useful, only a few labs have been able to build such an apparatus because of technical hurdles involved in coordinating and controlling both the camera and the turntable, and in processing the resulting images. This protocol provides instructions on how to construct, use, and process data from a computer-controlled turntable to measure the temporal changes in bioluminescence of individual cyanobacterial colonies. Furthermore, we describe how to prepare samples for use with the TopCount to minimize experimental noise, and generate meaningful quantitative measurements of clock output levels for advanced analysis. PMID:25662451
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
2010-01-01
... property variations. The system described here is a simple 4-electrode microfluidic device made of polydimethylsiloxane (PDMS) [50-53] which is reversibly ... through the fluid and heat it.) A more detailed description and analysis of the physics of electroosmotic actuation can be found in [46, 83]. In ... a control algorithm on a standard personal computer. The micro-fluidic device is made out of a soft polymer (polydimethylsiloxane, PDMS) and is ...
Flight-test experience in digital control of a remotely piloted vehicle.
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1972-01-01
The development of a remotely piloted vehicle system consisting of a remote pilot cockpit and a ground-based digital computer coupled to the aircraft through telemetry data links is described. The feedback control laws are implemented in a FORTRAN program. Flight-test experience involving high feedback gain limits for attitude and attitude rate feedback variables, filtering of sampled data, and system operation during intermittent telemetry data link loss is discussed. Comparisons of closed-loop flight tests with analytical calculations, and pilot comments on system operation are included.
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem solving, specification, and system approaches to EMC control are outlined, and the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components are summarized. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
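One way to picture the statistical line sampling idea (our simplified reading, not the paper's exact recipe) is to draw frequency offsets directly from the Voigt distribution, which is the sum of a Gaussian and a Lorentzian (Cauchy) deviate, and give each sample an equal share of the integrated line strength so that total opacity is preserved even when a weak line gets only a single sample. All parameter names below are illustrative.

```python
import numpy as np

def sample_line_opacity(nu0, strength, sigma_g, gamma_l, grid,
                        samples_per_unit_strength=1e3, rng=None):
    """Deposit one spectral line onto a wavenumber grid by statistical sampling.

    Offsets are Voigt-distributed (Gaussian plus Cauchy deviate); each sample
    carries strength / n, so the integrated opacity is preserved regardless of
    how many samples the line receives.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = max(1, int(round(samples_per_unit_strength * strength)))  # more samples for stronger lines
    offsets = rng.normal(0.0, sigma_g, n) + gamma_l * rng.standard_cauchy(n)
    opacity = np.zeros(len(grid) - 1)
    idx = np.searchsorted(grid, nu0 + offsets) - 1
    ok = (idx >= 0) & (idx < len(opacity))       # samples outside the grid are dropped
    np.add.at(opacity, idx[ok], strength / n)    # equal weight per sample
    return opacity

# Hypothetical line at 2000 cm^-1 on a coarse grid:
grid = np.linspace(1990.0, 2010.0, 401)
print(sample_line_opacity(2000.0, 0.05, sigma_g=0.02, gamma_l=0.01, grid=grid).sum())
```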
Spencer, Andrea E; Faraone, Stephen V; Bogucki, Olivia E; Pope, Amanda L; Uchida, Mai; Milad, Mohammed R; Spencer, Thomas J; Woodworth, K Yvonne; Biederman, Joseph
2016-01-01
To conduct a systematic review and meta-analysis examining the relationship between attention-deficit/hyperactivity disorder (ADHD) and posttraumatic stress disorder (PTSD). We reviewed literature through PubMed and PsycINFO without a specified date range, utilizing the search (posttraumatic stress disorder OR PTSD) AND (ADHD OR attention deficit hyperactivity disorder OR ADD OR attention deficit disorder OR hyperkinetic syndrome OR minimal brain dysfunction). References from relevant articles were reviewed. We identified 402 articles; 28 met criteria. We included original human research in English that operationalized diagnoses of ADHD and PTSD, evaluated the relationship between the disorders, and included controls. We excluded articles that failed to differentiate ADHD or PTSD from nonspecific or subsyndromal deficits or failed to compare their relationship. We extracted sample size, age, diagnostic methods, design, referral status, control type, and number of subjects with and without ADHD and PTSD alone and combined. We computed meta-analyses for 22 studies examining ADHD in PTSD and PTSD in ADHD using a random effects model and meta-analytic regression. We assessed for heterogeneity and publication bias and adjusted for intrastudy clustering. The relative risk (RR) for PTSD in ADHD was 2.9 (P < .0005); in samples using healthy controls, the RR was 3.7 (P = .001); and in samples using traumatized controls, the RR was 1.6 (P = .003). The RR for ADHD in PTSD was 1.7 (P < .0005); in samples using traumatized controls, the RR was 2.1 (P < .0005). The association was not significant in samples using psychiatric controls. Results indicate a bidirectional association between ADHD and PTSD, suggesting clinical implications and highlighting the need for neurobiological research that examines the mechanisms underlying this connection. © Copyright 2015 Physicians Postgraduate Press, Inc.
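For readers unfamiliar with the pooling step, the sketch below shows a generic DerSimonian-Laird random-effects combination of study-level log relative risks; the inputs in the example are hypothetical and are not the studies analyzed in this meta-analysis.

```python
import numpy as np

def pooled_rr(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    Returns the pooled RR and its 95% confidence interval.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)             # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)      # between-study variance
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_pooled),
            np.exp(pooled + 1.96 * se_pooled))

# Hypothetical study-level log RRs and standard errors:
print(pooled_rr([0.80, 0.45, 1.10], [0.25, 0.20, 0.30]))
```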
MTHFR c.677C>T is a risk factor for non-syndromic cleft lip with or without cleft palate in Chile.
Ramírez-Chau, C; Blanco, R; Colombo, A; Pardo, R; Suazo, J
2016-10-01
The functional variant within the 5,10-methylenetetrahydrofolate reductase (MTHFR) gene, c.677C>T, which produces alterations in folate metabolism, has been associated with the risk of non-syndromic cleft lip with or without cleft palate (NSCL/P). We assessed this association in a Chilean population using a combined analysis of case-control and case-parent trio samples. Samples of 165 cases and 291 controls and 121 case-parent trios (sharing the cases) were genotyped. The odds ratio (OR) was estimated for the case-control sample (allele and genotype frequency differences), and this result was confirmed by allele transmission distortion in the trios. Because these samples are not independent, a combined OR was also computed. The maternal genotype effect was additionally evaluated using a log-linear method. A borderline but not significant OR (1.28; CI 0.97-1.69) was observed for the risk allele (T) in the case-control sample. However, the trio sample showed a significant association (OR 1.56; CI 1.09-2.25), which was confirmed by the combined OR (1.37; CI 1.11-1.71). The maternal genotype was also associated with the phenotype (P = 0.002). In contrast to previous reports considering Chilean subjects, our results demonstrate that the offspring and maternal genotypes for the MTHFR c.677C>T variant are strongly associated with NSCL/P in this Chilean population. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
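The case-control odds ratio and its confidence interval used in studies like this one come from a 2x2 exposure table; a minimal sketch (Woolf's logit method, with made-up counts rather than the Chilean data) is shown below.

```python
import math

def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """Odds ratio with a 95% CI from a 2x2 case-control table (Woolf method)."""
    a, b, c, d = case_exposed, case_unexposed, control_exposed, control_unexposed
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical allele-carrier counts (cases exposed/unexposed, controls exposed/unexposed):
print(odds_ratio(90, 240, 130, 452))
```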
An automated atmospheric sampling system operating on 747 airliners
NASA Technical Reports Server (NTRS)
Perkins, P.; Gustafsson, U. R. C.
1975-01-01
An air sampling system that automatically measures the temporal and spatial distribution of selected particulate and gaseous constituents of the atmosphere has been installed on a number of commercial airliners and is collecting data on commercial air routes covering the world. Measurements of constituents related to aircraft engine emissions and other pollutants are made in the upper troposphere and lower stratosphere (6 to 12 km) in support of the Global Air Sampling Program (GASP). Aircraft operated by different airlines sample air at latitudes from the Arctic to Australia. This system includes specialized instrumentation for measuring carbon monoxide, ozone, water vapor, and particulates, a special air inlet probe for sampling outside air, a computerized automatic control, and a data acquisition system. Air constituents and related flight data are tape recorded in flight for later computer processing on the ground.
A software tool for modeling and simulation of numerical P systems.
Buiu, Catalin; Arsene, Octavian; Cipu, Corina; Patrascu, Monica
2011-03-01
A P system represents a distributed and parallel bio-inspired computing model in which the basic data structures are multi-sets or strings. Numerical P systems have been recently introduced; they use numerical variables and local programs (or evolution rules), usually in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Adler, Georg; Lembach, Yvonne
2015-08-01
Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in their assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them scored more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but also significant impairments in the MS patients were found for verbal short-term memory, episodic working memory and selective attention. The computer-based MAT was found to be useful for routine assessment of cognition in MS outpatients.
Computerized Measurement of Negative Symptoms in Schizophrenia
Cohen, Alex S.; Alpert, Murray; Nienow, Tasha M.; Dinzeo, Thomas J.; Docherty, Nancy M.
2008-01-01
Accurate measurement of negative symptoms is crucial for understanding and treating schizophrenia. However, current measurement strategies are reliant on subjective symptom rating scales which often have psychometric and practical limitations. Computerized analysis of patients’ speech offers a sophisticated and objective means of evaluating negative symptoms. The present study examined the feasibility and validity of using widely-available acoustic and lexical-analytic software to measure flat affect, alogia and anhedonia (via positive emotion). These measures were examined in their relationships to clinically-rated negative symptoms and social functioning. Natural speech samples were collected and analyzed for 14 patients with clinically-rated flat affect, 46 patients without flat affect and 19 healthy controls. The computer-based inflection and speech rate measures significantly discriminated patients with flat affect from controls, and the computer-based measure of alogia and negative emotion significantly discriminated the flat and non-flat patients. Both the computer and clinical measures of positive emotion/anhedonia corresponded to functioning impairments. The computerized method of assessing negative symptoms offered a number of advantages over the symptom scale-based approach. PMID:17920078
NASA Technical Reports Server (NTRS)
Lax, F. M.
1975-01-01
A time-controlled navigation system applicable to the descent phase of flight for airline transport aircraft was developed and simulated. The design incorporates the linear discrete-time sampled-data version of the linearized continuous-time system describing the aircraft's aerodynamics. Using optimal linear quadratic control techniques, an optimal deterministic control regulator which is implementable on an airborne computer is designed. The navigation controller assists the pilot in complying with assigned times of arrival along a four-dimensional flight path in the presence of wind disturbances. The strategic air traffic control concept is also described, followed by the design of a strategic control descent path. A strategy for determining possible times of arrival at specified waypoints along the descent path and for generating the corresponding route-time profiles that are within the performance capabilities of the aircraft is presented. Using a mathematical model of the Boeing 707-320B aircraft along with a Boeing 707 cockpit simulator interfaced with an Adage AGT-30 digital computer, a real-time simulation of the complete aircraft aerodynamics was achieved. The strategic four-dimensional navigation controller for longitudinal dynamics was tested on the nonlinear aircraft model in the presence of 15, 30, and 45 knot head-winds. The results indicate that the controller preserved the desired accuracy and precision of a time-controlled aircraft navigation system.
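A generic sketch of the discrete-time linear quadratic regulator design named above: iterate the Riccati difference equation backward until it converges and recover the state-feedback gain. The aircraft model matrices themselves are not reproduced; the A, B, Q, and R matrices in the example are toy placeholders.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500, tol=1e-10):
    """Discrete-time LQR gain by iterating the Riccati difference equation.

    Minimizes sum(x'Qx + u'Ru) for x[k+1] = A x[k] + B u[k], returning the
    gain K (u = -K x) and the converged cost-to-go matrix P.
    """
    P = np.array(Q, dtype=float)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    return K, P

# Toy double integrator discretized at dt = 1 s:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K, P = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```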
Suspended-sediment and nutrient loads for Waiakea and Alenaio Streams, Hilo, Hawaii, 2003-2006
Presley, Todd K.; Jamison, Marcael T.J.; Nishimoto, Dale C.
2008-01-01
Suspended sediment and nutrient samples were collected during wet-weather conditions at three sites on two ephemeral streams in the vicinity of Hilo, Hawaii during March 2004 to March 2006. Two sites were sampled on Waiakea Stream at 80- and 860-foot altitudes during March 2004 to August 2005. One site was sampled on Alenaio Stream at 10-foot altitude during November 2005 to March 2006. The sites were selected to represent different land uses and land covers in the area. Most of the drainage area above the upper Waiakea Stream site is conservation land. The drainage areas above the lower site on Waiakea Stream, and the site on Alenaio Stream, are a combination of conservation land, agriculture, rural, and urban land uses. In addition to the sampling, continuous-record streamflow sites were established at the three sampling sites, as well as at an additional site on Alenaio Stream at an altitude of 75 feet, 0.47 miles upstream from the sampling site. Stage was measured continuously at 15-minute intervals at these sites. Discharge, for any particular instant or for selected periods of time, was computed based on a stage-discharge relation determined from individual discharge measurements. Continuous records of discharge were computed at the two sites on Waiakea Stream and the upper site on Alenaio Stream. Due to non-ideal hydraulic conditions within the channel of Alenaio Stream, a continuous record of discharge was not computed at the lower site on Alenaio Stream where samples were taken. Samples were analyzed for suspended sediment and the nutrients total nitrogen, dissolved nitrite plus nitrate, and total phosphorus. Concentration data were converted to instantaneous load values: loads are the product of discharge and concentration, and are presented as tons per day for suspended sediment or pounds per day for nutrients. Daily-mean loads were computed by estimating concentrations relative to discharge using graphical constituent loading analysis techniques. Daily-mean loads were computed at the two Waiakea Stream sampling sites for the analyzed constituents during the period October 1, 2003 to September 30, 2005. No record of daily-mean load was computed for the Alenaio Stream sampling site because of the problems with computing a discharge record. The maximum daily-mean load of suspended sediment at the upper site on Waiakea Stream was 79 tons per day, and the maximum daily-mean loads of total nitrogen, dissolved nitrite plus nitrate, and total phosphorus were 1,350, 13, and 300 pounds per day, respectively. The maximum daily-mean load of suspended sediment at the lower site on Waiakea Stream was 468 tons per day, and the maximum daily-mean loads of total nitrogen, nitrite plus nitrate, and total phosphorus were 913, 8.5, and 176 pounds per day, respectively. In the estimated continuous daily-mean load record, all of the maximum daily-mean loads occurred between October 2003 and September 2004, except for the suspended-sediment load at the lower site, which occurred on September 15, 2005. Maximum values were not all caused by a single storm event. Overall, the record of daily-mean loads showed lower loads during storm events for suspended sediments and nutrients at the downstream site of Waiakea Stream during 2004 than at the upstream site. During 2005, however, the suspended sediment loads were higher at the downstream site than the upstream site.
Construction of a flood control channel between the two sites in 2005 may have contributed to the change in relative suspended-sediment loads.
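The load computation described in this report follows the standard unit conversion for discharge in cubic feet per second and concentration in milligrams per liter; the sketch below applies those constants to a hypothetical sample and is not data from the report.

```python
def instantaneous_load(discharge_cfs, concentration_mg_per_l):
    """Convert a paired discharge/concentration sample to instantaneous loads.

    1 ft³/s of water carrying 1 mg/L moves about 5.39 pounds per day, or
    0.0027 short tons per day, of that constituent.
    """
    pounds_per_day = discharge_cfs * concentration_mg_per_l * 5.39
    tons_per_day = discharge_cfs * concentration_mg_per_l * 0.0027
    return pounds_per_day, tons_per_day

# Hypothetical storm sample: 350 ft³/s with 620 mg/L suspended sediment
print(instantaneous_load(350, 620))   # -> roughly 1.17e6 lb/day, 586 tons/day
```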
Internet addiction: definition, assessment, epidemiology and clinical management.
Shaw, Martha; Black, Donald W
2008-01-01
Internet addiction is characterized by excessive or poorly controlled preoccupations, urges or behaviours regarding computer use and internet access that lead to impairment or distress. The condition has attracted increasing attention in the popular media and among researchers, and this attention has paralleled the growth in computer (and Internet) access. Prevalence estimates vary widely, although a recent random telephone survey of the general US population reported an estimate of 0.3-0.7%. The disorder occurs worldwide, but mainly in countries where computer access and technology are widespread. Clinical samples and a majority of relevant surveys report a male preponderance. Onset is reported to occur in the late 20s or early 30s age group, and there is often a lag of a decade or more from initial to problematic computer usage. Internet addiction has been associated with dimensionally measured depression and indicators of social isolation. Psychiatric co-morbidity is common, particularly mood, anxiety, impulse control and substance use disorders. Aetiology is unknown, but probably involves psychological, neurobiological and cultural factors. There are no evidence-based treatments for internet addiction. Cognitive behavioural approaches may be helpful. There is no proven role for psychotropic medication. Marital and family therapy may help in selected cases, and online self-help books and tapes are available. Lastly, a self-imposed ban on computer use and Internet access may be necessary in some cases.
Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián
2009-01-01
This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each one of the hybrid and the pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new proof concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975
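As an illustration of one event-based strategy of the kind compared in this study (a send-on-delta scheme, not the authors' exact implementation), the sketch below triggers a sensor transmission only when the level has moved by more than a threshold since the last report, and lets a proportional-integral controller update its command only on those events.

```python
class SendOnDeltaPI:
    """Event-based PI level controller sketch (send-on-delta sampling)."""

    def __init__(self, kp, ki, setpoint, delta):
        self.kp, self.ki, self.setpoint, self.delta = kp, ki, setpoint, delta
        self.integral = 0.0
        self.last_sent = None
        self.last_time = None
        self.u = 0.0

    def sensor_event(self, level):
        # Trigger condition evaluated at the sensor, not in the controller.
        if self.last_sent is None or abs(level - self.last_sent) > self.delta:
            self.last_sent = level
            return True
        return False

    def update(self, level, t):
        if not self.sensor_event(level):
            return self.u                     # no transmission: hold the last command
        error = self.setpoint - level
        dt = 0.0 if self.last_time is None else t - self.last_time
        self.integral += error * dt           # integrate over the real elapsed time
        self.u = self.kp * error + self.ki * self.integral
        self.last_time = t
        return self.u

# Example: 1 cm trigger band around a 50 cm level setpoint
controller = SendOnDeltaPI(kp=2.0, ki=0.1, setpoint=50.0, delta=1.0)
print(controller.update(45.0, t=0.0), controller.update(45.4, t=1.0))
```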
NASA Astrophysics Data System (ADS)
Fiorini, Paolo
1987-10-01
Sensor based, computer controlled end effectors for mechanical arms are receiving more and more attention in the robotics industry, because commonly available grippers are only adequate for simple pick and place tasks. This paper describes the current status of the research at JPL on a smart hand for a Puma 560 robot arm. The hand is a self contained, autonomous system, capable of executing high level commands from a supervisory computer. The mechanism consists of parallel fingers, powered by a DC motor, and controlled by a microprocessor embedded in the hand housing. Special sensors are integrated in the hand for measuring the grasp force of the fingers, and for measuring forces and torques applied between the arm and the surrounding environment. Fingers can be exercised under position, velocity and force control modes. The single-chip microcomputer in the hand executes the tasks of communication, data acquisition and sensor based motor control, with a sample cycle of 2 ms and a transmission rate of 9600 baud. The smart hand described in this paper represents a new development in the area of end effector design because of its multi-functionality and autonomy. It will also be a versatile test bed for experimenting with advanced control schemes for dexterous manipulation.
Custers, Kathleen; Van den Bulck, Jan
2010-04-01
To examine whether television viewing, computer game playing or book reading during meals predicts meal skipping with the aim of watching television, playing computer games or reading books (media meal skipping). A cross-sectional study was conducted using a standardized self-administered questionnaire. Analyses were controlled for age, gender and BMI. Data were obtained from a random sample of adolescents in Flanders, Belgium. Seven hundred and ten participants aged 12, 14 and 16 years. Of the participants, 11.8 % skipped meals to watch television, 10.5 % skipped meals to play computer games and 8.2 % skipped meals to read books. Compared with those who did not use these media during meals, the risk of skipping meals in order to watch television was significantly higher for those children who watched television during meals (2.9 times higher in those who watched television during at least one meal a day). The risk of skipping meals for computer game playing was 9.5 times higher in those who played computer games weekly or more while eating, and the risk of meal skipping in order to read books was 22.9 times higher in those who read books during meals less than weekly. The more meals the respondents ate with the entire family, the less likely they were to skip meals to watch television. The use of media during meals predicts meal skipping for using that same medium. Family meals appear to be inversely related to meal skipping for television viewing.
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2⁸ programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
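The effective sample size referred to above has a standard definition for importance sampling; a small sketch, computed from log weights for numerical stability, is given below (generic formula, not tied to the framework's internals).

```python
import numpy as np

def effective_sample_size(log_weights):
    """Effective sample size of an importance-sampling run.

    Generic formula: ESS = (sum w)^2 / sum(w^2), computed from log
    importance weights to avoid overflow.
    """
    lw = np.asarray(log_weights, float)
    lw = lw - lw.max()                 # rescale so the largest weight is 1
    w = np.exp(lw)
    return (w.sum() ** 2) / np.sum(w ** 2)

# Example: weights spanning several orders of magnitude
print(effective_sample_size([-1.0, -3.0, -10.0, -0.5]))
```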
Neuromorphic learning of continuous-valued mappings from noise-corrupted data
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1991-01-01
The effect of noise on the learning performance of the backpropagation algorithm is analyzed. A selective sampling of the training set is proposed to maximize the learning of control laws by backpropagation, when the data have been corrupted by noise. The training scheme is applied to the nonlinear control of a cart-pole system in the presence of noise. The neural computation provides the neurocontroller with good noise-filtering properties. In the presence of plant noise, the neurocontroller is found to be more stable than the teacher. A novel perspective on the application of neural network technology to control engineering is presented.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
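A minimal sketch of the sampling half of such a reliability computation, estimating the probability of violating a design requirement by plain Monte Carlo; the requirement margin and parameter distribution below are illustrative placeholders, not the paper's hybrid deterministic-sampling/asymptotic method.

```python
import numpy as np

def failure_probability(requirement_margin, sample_parameters, n=20_000, rng=None):
    """Plain Monte Carlo estimate of P(requirement violated).

    sample_parameters(rng) draws one realization of the uncertain plant
    parameters; requirement_margin(p) returns a scalar that is negative
    when the closed-loop design requirement is violated.
    """
    rng = np.random.default_rng(rng)
    violations = sum(requirement_margin(sample_parameters(rng)) < 0.0 for _ in range(n))
    return violations / n

# Toy example: a damping-ratio requirement under Gaussian parameter uncertainty.
p_fail = failure_probability(
    requirement_margin=lambda p: p - 0.3,               # violated when damping ratio < 0.3
    sample_parameters=lambda rng: rng.normal(0.5, 0.1),
)
```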
Minimum-fuel, 3-dimensional flightpath guidance of transfer jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1984-01-01
Minimum fuel, three dimensional flightpaths for commercial jet aircraft are discussed. The theoretical development is divided into two sections. In both sections, the necessary conditions of optimal control, including singular arcs and state constraints, are used. One section treats the initial and final portions (below 10,000 ft) of long optimal flightpaths. Here all possible paths can be derived by generating fields of extremals. Another section treats the complete intermediate length, three dimensional terminal area flightpaths. Here only representative sample flightpaths can be computed. Sufficient detail is provided to give the student of optimal control a complex example of a useful application of optimal control theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
2017-10-21
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
Computer Generated Holography with Intensity-Graded Patterns
Conti, Rossella; Assayag, Osnath; de Sars, Vincent; Guillon, Marc; Emiliani, Valentina
2016-01-01
Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms is employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses approaches that generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes tailored on the basis of the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging and optical control of neuronal excitation using uncaging or optogenetics. These past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables the generation of intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for the position-dependent diffraction efficiency of the LC-SLM or for sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light. PMID:27799896
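A minimal sketch of an iterative Fourier-transform (Gerchberg-Saxton-style) loop that accepts a graded, non-binary target intensity rather than a binary mask; this is a generic illustration of the algorithm class named in the abstract, not the authors' implementation.

```python
import numpy as np

def iterative_phase_mask(target_intensity, iterations=50):
    """Find a phase-only SLM mask whose far field approximates a graded target intensity."""
    target_amp = np.sqrt(np.asarray(target_intensity, dtype=float))
    # Random initial phase at the SLM plane.
    field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))   # impose the graded target amplitude
        field = np.fft.ifft2(far)
        field = np.exp(1j * np.angle(field))            # SLM is phase-only: unit amplitude
    return np.angle(field)                              # phase mask addressed to the LC-SLM
```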
Student Engagement in a Computer Rich Science Classroom
NASA Astrophysics Data System (ADS)
Hunter, Jeffrey C.
The purpose of this study was to examine the student lived experience when using computers in a rural science classroom. The overarching question the project sought to examine was: How do rural students relate to computers as a learning tool in comparison to a traditional science classroom? Participant data were collected using a pre-study survey, Experience Sampling during class and post-study interviews. Students want to use computers in their classrooms. Students shared that they overwhelmingly (75%) preferred a computer rich classroom to a traditional classroom (25%). Students reported a higher level of engagement in classes that use technology/computers (83%) versus those that do not use computers (17%). A computer rich classroom increased student control and motivation as reflected by a participant who shared: "by using computers I was more motivated to get the work done" (Maggie, April 25, 2014, survey). The researcher explored a rural school environment. Rural populations represent a large number of students and appear to be underrepresented in current research. The participants, tenth grade Biology students, were sampled in a traditional teacher-led class without computers for one week followed by a week using computers daily. Data supported that there is a new gap that separates students, a device divide. This divide separates those who have access to devices that are robust enough to do high level class work from those who do not. Although cellular phones have reduced the number of students who cannot access the Internet, they may have created a false feeling that access to a computer is no longer necessary at home. As this study shows, although most students have Internet access, fewer have access to a device that enables them to complete rigorous class work at home. Participants received little or no training at school in proper, safe use of a computer and the Internet. It is clear that the majority of students are self-taught or receive guidance from peers, resulting in lower self-confidence or the development of misconceptions about their skill or ability.
NASA Astrophysics Data System (ADS)
Ellison, Sara L.; Catinella, Barbara; Cortese, Luca
2018-05-01
We present a detailed assessment of the global atomic hydrogen gas fraction (fgas=log[MHI/M⋆]) in a sample of post-merger galaxies identified in the Sloan Digital Sky Survey (SDSS). Archival H I measurements of 47 targets are combined with new Arecibo observations of a further 51 galaxies. The stellar mass range of the post-merger sample, our observing strategy, detection thresholds and data analysis procedures replicate those of the extended GALEX Arecibo SDSS Survey (xGASS) which can therefore be used as a control sample. Our principal results are: 1) The post-merger sample shows a ˜ 50 per cent higher H I detection fraction compared with xGASS; 2) Accounting for non-detections, the median atomic gas fraction of the post-merger sample is larger than the control sample by 0.3 - 0.6 dex; 3) The median atomic gas fraction enhancement (Δfgas), computed on a galaxy-by-galaxy basis at fixed stellar mass, is 0.51 dex. Our results demonstrate that recently merged galaxies are typically a factor of ˜ 3 more H I rich than control galaxies of the same M⋆. If the control sample is additionally matched in star formation rate, the median H I excess is reduced to Δfgas = 0.2 dex, showing that the enhanced atomic gas fractions in post-mergers are not purely a reflection of changes in star formation activity. We conclude that merger-induced starbursts and outflows do not lead to prompt quenching via exhaustion/expulsion of the galactic gas reservoirs. Instead, we propose that if star formation ceases after a merger, it is more likely due to an enhanced turbulence which renders the galaxy unable to effectively form new stars.
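Written out, the quantities compared here are as follows; the tilde denotes the control-sample median at matched stellar mass, a notation introduced only for illustration.

```latex
f_{\mathrm{gas}} = \log\!\left(\frac{M_{\mathrm{HI}}}{M_\star}\right),
\qquad
\Delta f_{\mathrm{gas}} = f_{\mathrm{gas}} - \tilde{f}_{\mathrm{gas}}^{\,\mathrm{control}}(M_\star).
```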
Automated Proposition Density Analysis for Discourse in Aphasia.
Fromm, Davida; Greenhouse, Joel; Hou, Kaiyue; Russell, G Austin; Cai, Xizhen; Forbes, Margaret; Holland, Audrey; MacWhinney, Brian
2016-10-01
This study evaluates how proposition density can differentiate between persons with aphasia (PWA) and individuals in a control group, as well as among subtypes of aphasia, on the basis of procedural discourse and personal narratives collected from large samples of participants. Participants were 195 PWA and 168 individuals in a control group from the AphasiaBank database. PWA represented 6 aphasia types on the basis of the Western Aphasia Battery-Revised (Kertesz, 2006). Narrative samples were stroke stories for PWA and illness or injury stories for individuals in the control group. Procedural samples were from the peanut-butter-and-jelly-sandwich task. Language samples were transcribed using Codes for the Human Analysis of Transcripts (MacWhinney, 2000) and analyzed using Computerized Language Analysis (MacWhinney, 2000), which automatically computes proposition density (PD) using rules developed for automatic PD measurement by the Computerized Propositional Idea Density Rater program (Brown, Snodgrass, & Covington, 2007; Covington, 2007). Participants in the control group scored significantly higher than PWA on both tasks. PD scores were significantly different among the aphasia types for both tasks. Pairwise comparisons for both discourse tasks revealed that PD scores for the Broca's group were significantly lower than those for all groups except Transcortical Motor. No significant quadratic or linear association between PD and severity was found. Proposition density is differentially sensitive to aphasia type and most clearly differentiates individuals with Broca's aphasia from the other groups.
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI remotely ran with a novel application of a statistical analysis, the Multivariate Functional Shape Data Analysis, that computed high dimensional correlations between shape 3D coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
78 FR 53237 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... control secondary computers (FCSCs), rather than flight control primary computers (FCPCs). This document... control primary computers (FCPCs); modifying two flight control secondary computers (FCSCs); revising the... the AD, which specify FCSCs, instead of flight control primary computers FCPCs. No other part of the...
ERIC Educational Resources Information Center
Jere-Folotiya, Jacqueline; Chansa-Kabali, Tamara; Munachaka, Jonathan C.; Sampa, Francis; Yalukanda, Christopher; Westerholm, Jari; Richardson, Ulla; Serpell, Robert; Lyytinen, Heikki
2014-01-01
This intervention study was conducted to document conditions under which a computer based literacy game (GraphoGame™) could enhance literacy skills of first grade students in an African city. The participants were first grade students from Government schools (N = 573). These students were randomly sampled into control (N = 314) and various…
Design and calibration of a vacuum compatible scanning tunneling microscope
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1990-01-01
A vacuum compatible scanning tunneling microscope was designed and built, capable of imaging solid surfaces with atomic resolution. The single piezoelectric tube design is compact, and makes use of sample mounting stubs standard to a commercially available surface analysis system. Image collection and display is computer controlled, allowing storage of images for further analysis. Calibration results from atomic scale images are presented.
Quantum Computing Using Superconducting Qubits
2006-04-01
see the right fig.), and (iii) dynamically modifying ( pulsating ) this potential by controlling the motion of the A particles. This allows easy...superconductors with periodic pinning arrays. We show that sample heating by moving vortices produces negative differential resistivity (NDR) of both N- and S...efficient (i.e., using one two-bit operation) QC circuits using modern microfabrication techniques. scheme for this design [1,3] to achieve conditional
ERIC Educational Resources Information Center
Van der Kooy-Hofland, Verna A. C.; Bus, Adriana G.; Roskos, Kathleen
2012-01-01
Living Letters is an adaptive game designed to promote children's combining of how the proper name sounds with their knowledge of how the name looks. A randomized controlled trial (RCT) was used to experimentally test whether priming for attending to the sound-symbol relationship in the proper name can reduce the risk for developing reading…
Statistical evaluation of vibration analysis techniques
NASA Technical Reports Server (NTRS)
Milner, G. Martin; Miller, Patrice S.
1987-01-01
An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.
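One way to sketch the quantification of detection performance as probability of detection versus probability of false alarm is to sweep a threshold over a detection statistic computed for healthy and faulty machinery records; the synthetic data below stand in for real vibration features.

```python
import numpy as np

def roc_curve(healthy_stats, faulty_stats, n_thresholds=200):
    """Empirical P(detection) vs P(false alarm) from a common detection threshold."""
    healthy = np.asarray(healthy_stats)
    faulty = np.asarray(faulty_stats)
    thresholds = np.linspace(min(healthy.min(), faulty.min()),
                             max(healthy.max(), faulty.max()), n_thresholds)
    p_fa = np.array([(healthy > t).mean() for t in thresholds])  # false alarms on healthy data
    p_d = np.array([(faulty > t).mean() for t in thresholds])    # detections on faulty data
    return p_fa, p_d

# Synthetic example: the faulty condition shifts the detection statistic upward.
rng = np.random.default_rng(0)
pfa, pd_ = roc_curve(rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500))
```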
Optimizing the multicycle subrotational internal cooling of diatomic molecules
NASA Astrophysics Data System (ADS)
Aroch, A.; Kallush, S.; Kosloff, R.
2018-05-01
Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested by an increase of purity, is achieved by the complementary uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that results in cooling upon multicycle excitation-emission steps. The obtained optimal transformation was shown to be capable of cooling molecular ions to the sub-kelvin regime.
Computer Graphics Simulations of Sampling Distributions.
ERIC Educational Resources Information Center
Gordon, Florence S.; Gordon, Sheldon P.
1989-01-01
Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…
Khandaker, Morshed; Riahinezhad, Shahram; Williams, Wendy R.; Wolf, Roman
2017-01-01
The effect of depositing a collagen (CG)-poly-ε-caprolactone (PCL) nanofiber mesh (NFM) at the microgrooves of titanium (Ti) on the mechanical stability and osseointegration of the implant with bone was investigated using a rabbit model. Three groups of Ti samples were produced: control Ti samples where there were no microgrooves or CG-PCL NFM, groove Ti samples where microgrooves were machined on the circumference of Ti, and groove-NFM Ti samples where CG-PCL NFM was deposited on the machined microgrooves. Each group of Ti samples was implanted in the rabbit femurs for eight weeks. The mechanical stability of the Ti/bone samples were quantified by shear strength from a pullout tension test. Implant osseointegration was evaluated by a histomorphometric analysis of the percentage of bone and connective tissue contact with the implant surface. The bone density around the Ti was measured by micro–computed tomography (μCT) analysis. This study found that the shear strength of groove-NFM Ti/bone samples was significantly higher compared to control and groove Ti/bone samples (p < 0.05) and NFM coating influenced the bone density around Ti samples. In vivo histomorphometric analyses show that bone growth into the Ti surface increased by filling the microgrooves with CG-PCL NFM. The study concludes that a microgroove assisted CG-PCL NFM coating may benefit orthopedic implants. PMID:28608839
Antenna analysis using neural networks
NASA Technical Reports Server (NTRS)
Smith, William T.
1992-01-01
Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
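A minimal sketch of the kind of network described here: a single-hidden-layer feedforward net trained by backpropagation to map 41 pattern samples to 40 element excitations (20 real, 20 imaginary). The training pairs below are random placeholders rather than actual Woodward-Lawson synthesis cases.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 41, 30, 40     # 41 pattern samples -> 40 excitation values
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(X):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    return h, h @ W2 + b2              # linear output layer (excitations)

def train(X, Y, lr=0.01, epochs=2000):
    """Plain batch gradient descent on mean-squared error (backpropagation)."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, Y_hat = forward(X)
        err = Y_hat - Y
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)  # backpropagate through tanh
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

# Placeholder training pairs standing in for the 27 synthesized-pattern cases.
X = rng.normal(size=(27, n_in))
Y = rng.normal(size=(27, n_out))
train(X, Y)
```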
Antenna analysis using neural networks
NASA Astrophysics Data System (ADS)
Smith, William T.
1992-09-01
Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).
Material identification based upon energy-dependent attenuation of neutrons
Marleau, Peter
2015-10-06
Various technologies pertaining to identifying a material in a sample and imaging the sample are described herein. The material is identified by computing energy-dependent attenuation of neutrons that is caused by presence of the sample in travel paths of the neutrons. A mono-energetic neutron generator emits the neutron, which is downscattered in energy by a first detector unit. The neutron exits the first detector unit and is detected by a second detector unit subsequent to passing through the sample. Energy-dependent attenuation of neutrons passing through the sample is computed based upon a computed energy of the neutron, wherein such energy can be computed based upon 1) known positions of the neutron generator, the first detector unit, and the second detector unit; or 2) computed time of flight of neutrons between the first detector unit and the second detector unit.
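Assuming the usual non-relativistic kinematics and Beer-Lambert attenuation, the two computations described read roughly as follows; the path length, times of flight and counts are placeholders.

```python
import numpy as np

NEUTRON_MASS = 1.674927e-27  # kg

def energy_from_tof(path_length_m, time_of_flight_s):
    """Non-relativistic kinetic energy (J) of a neutron from its time of flight."""
    v = path_length_m / time_of_flight_s
    return 0.5 * NEUTRON_MASS * v**2

def attenuation(transmitted_counts, open_beam_counts):
    """Energy-dependent attenuation mu*x = -ln(I/I0), one value per energy bin."""
    return -np.log(np.asarray(transmitted_counts, dtype=float) /
                   np.asarray(open_beam_counts, dtype=float))
```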
The Process of Developing a Multi-Cell KEMS Instrument
NASA Technical Reports Server (NTRS)
Copland, E. H.; Auping, J. V.; Jacobson, N. S.
2012-01-01
Multi-cell KEMS offers many advantages over single cell instruments in regard to in-situ temperature calibration and studies on high temperature alloys and oxides of interest to NASA. The instrument at NASA Glenn is a 90 deg magnetic sector instrument originally designed for single cell operation. The conversion of this instrument to a multi-cell instrument with restricted collimation is discussed. For restricted collimation, the 'field aperture' is in the copper plate separating the Knudsen Cell region and the ionizer and the 'source aperture' is adjacent to the ionizer box. A computer controlled x-y table allows positioning of one of the three cells into the sampling region. Heating is accomplished via a Ta sheet element and temperature is measured via an automatic pyrometer from the bottom of the cells. The computer control and data system have been custom developed for this instrument and are discussed. Future improvements are also discussed.
NASA Technical Reports Server (NTRS)
Tibbetts, J. G.
1979-01-01
Methods for predicting noise at any point on an aircraft while the aircraft is in a cruise flight regime are presented. Developed for use in laminar flow control (LFC) noise effects analyses, they can be used in any case where aircraft generated noise needs to be evaluated at a location on an aircraft while under high altitude, high speed conditions. For each noise source applicable to the LFC problem, a noise computational procedure is given in algorithm format, suitable for computerization. Three categories of noise sources are covered: (1) propulsion system, (2) airframe, and (3) LFC suction system. In addition, procedures are given for noise modifications due to source soundproofing and the shielding effects of the aircraft structure wherever needed. Sample cases, for each of the individual noise source procedures, are provided to familiarize the user with typical input and computed data.
Magneto-optic superlattice thin films: Fabrication, structural and magnetic characterization
NASA Technical Reports Server (NTRS)
Falco, C. M.; Engel, B. N.; Vanleeuwen, R. A.; Yu, J.
1993-01-01
During this quarter studies were extended to determine the electronic contribution to the perpendicular interface anisotropy in Co-based multilayers. Using in situ Kerr effect measurements, the influences of different transition metals (TM = Ag, Au, Cu, and Pd) on the magnetic properties of single-crystal Co films grown on Pd (111) and Au (111) surfaces are investigated. Last quarter the discovery of a large peak in the perpendicular anisotropy when approximately one monolayer of Cu or Ag is deposited on the Co surface was reported. We now have added a computer-controlled stepper-motor drive to our MBE sample transfer mechanism. The motor allows us to move the sample at a constant velocity from behind a shutter during deposition. The film, therefore, is deposited as a wedge with a linear variation of thickness across the substrate. In this way, a continuous range of coverage on a single sample is studied. The stepper motor also provides the necessary control for precisely positioning the sample in the laser beam for Kerr effect measurements at the different coverages.
Automated sample exchange and tracking system for neutron research at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Rix, J. E.; Weber, J. K. R.; Santodonato, L. J.; Hill, B.; Walker, L. M.; McPherson, R.; Wenzel, J.; Hammons, S. E.; Hodges, J.; Rennich, M.; Volin, K. J.
2007-01-01
An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW™ program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10mm in diameter and equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location that was at a temperature of ˜25K, after several minutes, it was moved onto a "landing pad" at ˜10K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and the exchange operation takes approximately 2min. The time to cool the sample from ambient temperature to ˜10K was approximately 7min including precooling time. The cooling time increases to approximately 12min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to ˜350K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to ˜10K after it has been heated to ˜240K was approximately 20min.
The effect of a computer-related ergonomic intervention program on learners in a school environment.
Sellschop, Ingrid; Myezwa, Hellen; Mudzi, Witness; Mbambo-Kekana, Nonceba
2015-01-01
The interest in school ergonomic intervention programs and their effects on musculoskeletal pain is increasing around the world. The objective of this longitudinal randomized control trial was to implement and measure the effects of a computer-related ergonomics intervention on grade eight learners in a school environment in Johannesburg South Africa (a developing country). The sample comprised of a control group (n= 66) and an intervention group (n= 61). The outcome measures used were posture assessment using the Rapid Upper Limb Assessment tool (RULA) and the prevalence of musculoskeletal pain using a visual analogue scale (VAS). Measurements were done at baseline, three months and six months post intervention. The results showed that the posture of the intervention group changed significantly from an Action Level 4 to an Action level 2 and Action level 3, indicating a sustained improvement of learners' postural positions whilst using computers. The intervention group showed a significant reduction in the prevalence of musculoskeletal pain from 42.6% at baseline to 18% six months post intervention (p< 0.003). In conclusion, the results indicated that a computer-related intervention program for grade eight learners in a school environment is effective and that behavioural changes can be made that are sustainable over a period of six months.
From atomistic interfaces to dendritic patterns
NASA Astrophysics Data System (ADS)
Galenko, P. K.; Alexandrov, D. V.
2018-01-01
Transport processes around phase interfaces, together with thermodynamic properties and kinetic phenomena, control the formation of dendritic patterns. Using the thermodynamic and kinetic data of phase interfaces obtained on the atomic scale, one can analyse the formation of a single dendrite and the growth of a dendritic ensemble. This is the result of recent progress in theoretical methods and computational algorithms calculated using powerful computer clusters. Great benefits can be attained from the development of micro-, meso- and macro-levels of analysis when investigating the dynamics of interfaces, interpreting experimental data and designing the macrostructure of samples. The review and research articles in this theme issue cover the spectrum of scales (from nano- to macro-length scales) in order to exhibit recently developing trends in the theoretical analysis and computational modelling of dendrite pattern formation. Atomistic modelling, the flow effect on interface dynamics, the transition from diffusion-limited to thermally controlled growth existing at a considerable driving force, two-phase (mushy) layer formation, the growth of eutectic dendrites, the formation of a secondary dendritic network due to coalescence, computational methods, including boundary integral and phase-field methods, and experimental tests for theoretical models-all these themes are highlighted in the present issue. This article is part of the theme issue `From atomistic interfaces to dendritic patterns'.
Direct adaptive control of manipulators in Cartesian space
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
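A rough, simplified sketch of global translation estimation in the spirit of the image interpolation algorithm: the current frame is modeled as the reference frame plus a shift-weighted combination of shifted reference images, and the shift follows from a 2x2 least-squares solve. The actual I2A on the DSP interpolates between discretely shifted reference frames; central differences are used here for brevity.

```python
import numpy as np

def global_translation(ref, cur):
    """Estimate the global (dx, dy) shift in pixels between two frames."""
    fx = (np.roll(ref, -1, axis=1) - np.roll(ref, 1, axis=1)) / 2.0  # shifted-reference difference, x
    fy = (np.roll(ref, -1, axis=0) - np.roll(ref, 1, axis=0)) / 2.0  # shifted-reference difference, y
    ft = cur - ref
    A = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])
    b = np.array([np.sum(fx * ft), np.sum(fy * ft)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```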
Ethylene monitoring and control system
NASA Technical Reports Server (NTRS)
Nelson, Bruce N. (Inventor); Kane, James A. (Inventor); Richard, II, Roy V. (Inventor)
2000-01-01
A system that can accurately monitor and control low concentrations of ethylene gas includes a test chamber configured to receive sample gas potentially containing an ethylene concentration and ozone, a detector configured to receive light produced during a reaction between the ethylene and ozone and to produce signals related thereto, and a computer connected to the detector to process the signals to determine therefrom a value of the concentration of ethylene in the sample gas. The supply for the system can include a four way valve configured to receive pressurized gas at one input and a test chamber. A piston is journaled in the test chamber with a drive end disposed in a drive chamber and a reaction end defining with walls of the test chamber a variable volume reaction chamber. The drive end of the piston is pneumatically connected to two ports of the four way valve to provide motive force to the piston. A manifold is connected to the variable volume reaction chamber, and is configured to receive sample gasses from at least one of a plurality of ports connectable to degreening rooms and to supply the sample gas to the reactive chamber for reaction with ozone. The apparatus can be used to monitor and control the ethylene concentration in multiple degreening rooms.
Ethylene monitoring and control system
NASA Technical Reports Server (NTRS)
Nelson, Bruce N. (Inventor); Kane, James A. (Inventor); Richard, II, Roy V. (Inventor)
2001-01-01
A system that can accurately monitor and control low concentrations of ethylene gas includes a test chamber configured to receive sample gas potentially containing an ethylene concentration and ozone, a detector configured to receive light produced during a reaction between the ethylene and ozone and to produce signals related thereto, and a computer connected to the detector to process the signals to determine therefrom a value of the concentration of ethylene in the sample gas. The supply for the system can include a four way valve configured to receive pressurized gas at one input and a test chamber. A piston is journaled in the test chamber with a drive end disposed in a drive chamber and a reaction end defining with walls of the test chamber a variable volume reaction chamber. The drive end of the piston is pneumatically connected to two ports of the four way valve to provide motive force to the piston. A manifold is connected to the variable volume reaction chamber, and is configured to receive sample gasses from at least one of a plurality of ports connectable to degreening rooms and to supply the sample gas to the reactive chamber for reaction with ozone. The apparatus can be used to monitor and control the ethylene concentration in multiple degreening rooms.
Digital implementation of the TF30-P-3 turbofan engine control
NASA Technical Reports Server (NTRS)
Cwynar, D. S.; Batterton, P. G.
1975-01-01
The standard hydromechanical control modes for TF30-P-3 engine were implemented on a digital process control computer. Programming methods are described, and a method is presented to solve stability problems associated with fast response dynamic loops contained within the exhaust nozzle control. A modification of the exhaust nozzle control to provide for either velocity or position servoactuation systems is discussed. Transient response of the digital control was evaluated by tests on a real time hybrid simulation of the TF30-P-3 engine. It is shown that the deadtime produced by the calculation time delay between sampling and final output is more significant to transient response than the effects associated with sampling rate alone. For the main fuel control, extended update and calculation times resulted in a lengthened transient response to throttle bursts from idle to intermediate with an increase in high pressure compressor stall margin. Extremely long update intervals of 250 msec could be achieved without instability. Update extension for the exhaust nozzle control resulted in a delayed response of the afterburner light-off detector and exhaust nozzle overshoot with resulting fan oversuppression. Long update times of 150 msec caused failure of the control due to a false indication by the blowout detector.
Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.
Agrawal, Ankur; Elhanan, Gai
2014-02-01
To quantify the presence of and evaluate an approach for detection of inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets, based on the same parameter: number of parents, attributes, groups, all the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as was the number of attribute assignment and hierarchical inconsistencies within their respective samples. While the formal definitions of SCT are only a minor consideration at this time of the HITECH initiative, in the grand scheme of sophisticated, meaningful use of captured clinical data they are essential. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
Malta, Cristiana P; Damasceno, Naiana Nl; Ribeiro, Rosangela A; Silva, Carolina Sf; Devito, Karina L
2016-12-01
The aim of this study was to evaluate the contamination rate of intra- and extraoral digital X-ray equipment in a dental radiology clinic at a public educational institution. Samples were collected on three different days, at two times in the day: in the morning, before attending patients, and at the end of the day, after appointment hours and before cleaning and disinfection procedures. Samples were collected from the periapical X-ray machine (tube head, positioning device, control panel and activator button), the panoramic X-ray machine (temporal support, bite block, control panel and activator button), the intraoral digital system (sensor), and the digital system computers (keyboard and mouse). The samples were seeded in different culture media and incubated, and colony-forming units (CFU/mL) were counted. Biochemical tests were performed for suspected colonies of Staphylococcus, Streptococcus and Gram-negative bacilli (GNB). Fungi were visually differentiated into filamentous fungi and yeasts. The results indicated the growth of fungi and Staphylococcus from all sampling locations. GNB growth was observed from all sites sampled from the intraoral X-ray equipment. On the panoramic unit, GNB growth was observed in samples from the activator button, keyboard and mouse. In general, a higher number of CFU/mL was present before use. It can be concluded that more stringent protocols are needed to control infection and prevent X-ray exams from acting as a vehicle for cross-contamination. Sociedad Argentina de Investigación Odontológica.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
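A minimal sketch of the re-linearize-and-re-solve step implied by the SDARE/LQR strategy, using SciPy's discrete Riccati solver; the linearize(x) callback and the weights Q, R are placeholders, not the paper's actual spacecraft model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def sdre_gain(A, B, Q, R):
    """Discrete LQR gain for the current state-dependent linearization (A, B)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # K = (R + B'PB)^-1 B'PA

def control_step(x, linearize, Q, R):
    """One sample of the suboptimal controller: re-linearize, re-solve, apply u = -Kx."""
    A, B = linearize(x)          # state-dependent (re-)linearization of the equations of motion
    K = sdre_gain(A, B, Q, R)
    return -K @ x
```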
Kolmodin MacDonell, Karen; Naar, Sylvie; Gibson-Scipio, Wanda; Lam, Phebe; Secord, Elizabeth
2016-10-01
To conduct a randomized controlled pilot of a multicomponent, technology-based intervention promoting adherence to controller medication in African-American emerging adults with asthma. The intervention consisted of two computer-delivered sessions based on motivational interviewing combined with text messaged reminders between sessions. Participants (N = 49) were 18-29 years old, African-American, with persistent asthma requiring controller medication. Participants had to report poor medication adherence and asthma control. Youth were randomized to receive the intervention or an attention control. Data were collected through computer-delivered self-report questionnaires at baseline, 1, and 3 months. Ecological Momentary Assessment via two-way text messaging was also used to collect "real-time" data on medication use and asthma control. The intervention was feasible and acceptable to the target population, as evidenced by high retention rates and satisfaction scores. Changes in study outcomes from pre- to postintervention favored the intervention, particularly for decrease in asthma symptoms, t (42) = 2.22, p < .05 (Cohen's d = .071). Results suggest that the intervention is feasible and effective. However, findings are preliminary and should be replicated with a larger sample and more sophisticated data analyses. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Robust and real-time control of magnetic bearings for space engines
NASA Technical Reports Server (NTRS)
Sinha, Alok; Wang, Kon-Well; Mease, K.; Lewis, S.
1991-01-01
Currently, NASA Lewis Research Center is developing magnetic bearings for Space Shuttle Main Engine (SSME) turbopumps. The control algorithms which have been used are based on either the proportional-integral-derivative control (PID) approach or the linear quadratic (LQ) state space approach. These approaches lead to an acceptable performance only when the system model is accurately known, which is seldom true in practice. For example, the rotor eccentricity, which is a major source of vibration at high speeds, cannot be predicted accurately. Furthermore, the dynamics of a rotor shaft, which must be treated as a flexible system to model the elastic rotor shaft, is infinite dimensional in theory and the controller can only be developed on the basis of a finite number of modes. Therefore, the development of the control system is further complicated by the possibility of closed loop system instability because of residual or uncontrolled modes, the so-called spillover problem. Consequently, novel control algorithms for magnetic bearings are being developed to be robust to inevitable parametric uncertainties, external disturbances, spillover phenomenon and noise. Also, as pointed out earlier, magnetic bearings must exhibit good performance at a speed over 30,000 rpm. This implies that the sampling period available for the design of a digital control system has to be of the order of 0.5 milliseconds. Therefore, feedback coefficients and other required controller parameters have to be computed off-line so that the on-line computational burden is extremely small. The development of the robust and real-time control algorithms is based on the sliding mode control theory. In this method, a dynamic system is made to move along a manifold of sliding hyperplanes to the origin of the state space. The number of sliding hyperplanes equals that of actuators. The sliding mode controller has two parts: linear state feedback and nonlinear terms. The nonlinear terms guarantee that the system would reach the intersection of all sliding hyperplanes and remain on it when bounds on the errors in the system parameters and external disturbances are known. The linear part of the control drives the system to the origin of state space. Another important feature is that the controller parameters can be computed off-line. Consequently, on-line computational burden is small.
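A minimal sketch of the two-part sliding mode control law described (linear state feedback plus a switching term on the sliding variables); the matrices are placeholders, and in practice the sign function is usually smoothed to limit chattering.

```python
import numpy as np

def sliding_mode_control(x, S, K_lin, eta):
    """u = linear state feedback + switching term that drives s = S x to zero.

    S     : sliding-surface matrix, one row per actuator
    K_lin : precomputed (off-line) linear feedback gains
    eta   : switching gains chosen from bounds on parameter errors and disturbances
    """
    s = S @ x                              # sliding variables
    return -K_lin @ x - eta * np.sign(s)   # nonlinear term guarantees reaching the surface
```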
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
Gradient-free MCMC methods for dynamic causal modelling
Sengupta, Biswa; Friston, Karl J.; Penny, Will D.
2015-03-14
Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density -- albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
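The comparison metric implied here, independent (effective) samples per unit compute time, can be sketched as follows; the autocorrelation truncation rule below is a simple common heuristic, not the authors' exact estimator.

```python
import numpy as np

def effective_sample_size(chain):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations),
    truncated at the first non-positive autocorrelation."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)  # acf[0] == 1
    tau = 1.0
    for rho in acf[1:]:
        if rho <= 0:
            break
        tau += 2.0 * rho
    return n / tau

def ess_per_second(chain, runtime_seconds):
    """Independent samples produced per unit computational time."""
    return effective_sample_size(chain) / runtime_seconds
```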
Characterization of real-time computers
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.
1984-01-01
A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have been traditionally designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent to the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications - the number-power tradeoff and fault-masking and synchronization.
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
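A minimal sketch of one iteration of the static positioning algorithm described above: the vision-measured tip error is mapped through the manipulator Jacobian into a joint-space correction, which becomes the new reference for the joint controller. The `jacobian` function, the gain, and the use of a pseudoinverse are placeholders for illustration.

```python
import numpy as np

def static_position_step(q, tip_measured, tip_desired, jacobian, gain=0.5):
    """One iteration: map the measured tip error to a joint correction and
    issue it as a new reference to the joint controller."""
    error = np.asarray(tip_desired) - np.asarray(tip_measured)  # from the vision system
    J = jacobian(q)                                             # kinematic Jacobian at the current joints
    dq = gain * np.linalg.pinv(J) @ error                       # joint-space correction
    return q + dq                                               # new joint reference
```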
Schmitt, Michael
2004-09-01
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
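The qualitative bounds stated above can be written compactly. Reading "almost linear" as W log W is an assumption about the precise form; the abstract itself gives only the qualitative growth rates in the number of network parameters W.

```latex
% W = number of network parameters; constants and log factors are assumed forms.
\[
  \mathrm{Pdim}\bigl(\mathcal{N}_{\text{constant depth}}\bigr) = \Theta\!\bigl(W \log W\bigr),
  \qquad
  \mathrm{Pdim}\bigl(\mathcal{N}_{\text{unconstrained depth}}\bigr) = \Theta\!\bigl(W^{2}\bigr).
\]
```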
NASA Astrophysics Data System (ADS)
Mabilangan, Arvin I.; Lopez, Lorenzo P.; Faustino, Maria Angela B.; Muldera, Joselito E.; Cabello, Neil Irvin F.; Estacio, Elmer S.; Salvador, Arnel A.; Somintac, Armando S.
2016-12-01
Porosity-dependent terahertz emission of porous silicon (PSi) was studied. The PSi samples were fabricated via electrochemical etching of boron-doped (100) silicon in a solution containing 48% hydrofluoric acid, deionized water and absolute ethanol in a 1:3:4 volumetric ratio. The porosity was controlled by varying the supplied anodic current for each sample. The samples were then optically characterized via normal-incidence reflectance spectroscopy to obtain values for their respective refractive indices and porosities. Absorbance of each sample was also computed using the data from its respective reflectance spectrum. Terahertz emission of each sample was acquired through terahertz time-domain spectroscopy. A decreasing trend in the THz signal power was observed as the porosity of each PSi was increased. This was caused by the decrease in the absorption strength as the silicon crystallite size in the PSi was minimized.
Optoelectronic system of online measurements of unburned carbon in coal fly ash
NASA Astrophysics Data System (ADS)
Golas, Janusz; Jankowski, Henryk; Niewczas, Bogdan; Piechna, Janusz; Skiba, Antoni; Szkutnik, Wojciech; Szkutnik, Zdzislaw P.; Wartak, Ryszarda; Worek, Cezary
2001-08-01
Carbon-in-ash level is an important consideration for combustion efficiency as well as ash marketing. An optoelectronic analyzing system for on-line determination and monitoring of the unburned carbon content of ash samples is presented. The apparatus operates on the principle that carbon content is proportional to the reflectance of IR light. Ash samples are collected isokinetically from the flue gas duct and placed in a sample tube with a flat glass bottom. The sample is then exposed to a light source. The reflectance intensity is used by the system's computer to determine residual carbon content from correlation curves. The sample is then air-purged back to the duct or to the attached sample canister to enable laboratory check analysis. The total cycle takes between 5 and 10 minutes. Real-time results of carbon content with an accuracy of 0.3-0.7 percent are reported and can be used for boiler control.
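The correlation-curve step can be pictured as a simple interpolation from reflectance to carbon content. The calibration points below are invented for illustration only; they are not the instrument's actual curves.

```python
import numpy as np

# Hypothetical calibration points: IR reflectance (arbitrary units) versus
# laboratory-determined unburned carbon content (percent).
calib_reflectance = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
calib_carbon_pct  = np.array([12.0,  8.0,  5.0,  3.0,  1.5])

def carbon_from_reflectance(r):
    """Interpolate the correlation curve to estimate residual carbon content."""
    return np.interp(r, calib_reflectance, calib_carbon_pct)

print(carbon_from_reflectance(0.55))  # roughly 4.3 % carbon for these made-up data
```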
The Study of Indicatrices of Space Object Coatings in a Controlled Laboratory Environment
NASA Astrophysics Data System (ADS)
Koshkin, N.; Burlak, N.; Petrov, M.; Strakhova, S.
The indicatrices of light scattering by radiation balance coatings used on space objects (SO) were determined in a laboratory experiment under controlled conditions. The laboratory device for the physical simulation of photometric observations of space objects in orbit, which was used in this case to study the optical properties of coating samples, is described. The features of light reflection off plane coating samples, including multi-layer insulation (MLI) blankets and metal surfaces coated with several layers of enamel EP-140, special polyacrylate enamel AK-512 and matte finish Tp-CO-2, were determined. The indicated coatings are compound reflectors that exhibit both diffuse and specular reflection. The data obtained are to be used in the development of computer optical-geometric models of space objects or their fragments (space debris) to interpret photometry results for real space objects.
[Cardiovascular circulation feedback control treatment instrument].
Ge, Yu-zhi; Zhu, Xing-huan; Sheng, Guo-tai; Cao, Ping-liang; Liu, Dong-sheng; Wu, Zhi-ting
2005-07-01
The cardiovascular circulation feedback control treatment instrument (CFCTI) is an automatic feedback control treatment system with functions for monitoring, alarming, fault self-diagnosis, and on-line testing in the closed loop. The instrument design is based on successful clinical experience; data are input to the computer in real time through a pressure sensor and an A/D card. A user interface window allows the doctor to choose among different medicines. Commands are output to control the dose of medicine through the transfusion system, and the response to the medicine is updated continually. CFCTI avoids man-made errors and long sampling intervals. Its reliability and accuracy in rescuing critical patients are much higher than those of traditional methods.
ARM Airborne Continuous carbon dioxide measurements
Biraud, Sebastien
2013-03-26
The heart of the AOS CO2 Airborne Rack Mounted Analyzer System is the AOS Manifold. The AOS Manifold is a nickel-coated aluminum analyzer and gas processor designed around two identical nickel-plated gas cells, one for reference gas and one for sample gas. The sample and reference cells are uniquely designed to provide optimal flushing efficiency. These cells are situated between a black-body radiation source and a photo-diode detection system. The AOS manifold also houses flow meters, pressure sensors and control valves. The exhaust from the analyzer flows into a buffer volume which allows for precise pressure control of the analyzer. The final piece of the analyzer is the demodulator board which is used to convert the DC signal generated by the analyzer into an AC response. The resulting output from the demodulator board is an averaged count of CO2 over a specified hertz cycle reported in volts and a corresponding temperature reading. The system computer is responsible for the input of commands and therefore works to control the unit functions such as flow rate, pressure, and valve control. The remainder of the system consists of compressors, reference gases, air drier, electrical cables, and the necessary connecting plumbing to provide a dry sample air stream and reference air streams to the AOS manifold.
Laboratory Measurements of Single-Particle Polarimetric Spectrum
NASA Astrophysics Data System (ADS)
Gritsevich, M.; Penttila, A.; Maconi, G.; Kassamakov, I.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.
2017-12-01
Measuring scattering properties of different targets is important for material characterization, remote sensing applications, and for verifying theoretical results. Furthermore, there are usually simplifications made when we model targets and compute the scattering properties, e.g., ideal shape or constant optical parameters throughout the target material. Experimental studies help in understanding the link between the observed properties and computed results. Experimentally derived Mueller matrices of studied particles can be used as input for larger-scale scattering simulations, e.g., radiative transfer computations. This method makes it possible to bypass the problem of using an idealized model for single-particle optical properties. While existing approaches offer ensemble- and orientation-averaged particle properties, our aim is to measure individual particles with controlled or known orientation. With the newly developed scatterometer, we aim to offer a novel possibility of measuring single, small (down to μm-scale) targets and their polarimetric spectra. This work presents an experimental setup that measures light scattered by a fixed small particle with dimensions ranging between micrometer and millimeter sizes. The goal of our setup is nondestructive characterization of such particles by measuring light of multiple wavelengths scattered in 360° in a horizontal plane by an ultrasonically levitating sample, whilst simultaneously controlling its 3D position and orientation. We describe the principles and design of our instrument and its calibration. We also present example measurements of real samples. This study was conducted with the support of the European Research Council, in the frame of the Advanced Grant project No. 320773 `Scattering and Absorption of Electromagnetic Waves in Particulate Media' (SAEMPL).
Construction of a General Purpose Command Language for Use in Computer Dialog.
1980-09-01
Fragments recovered from the report's front matter and figures: a list of figures including 'Skeletal Command Action File', 'Sample from Cyber Action File', and 'Program MONITOR Structure Chart', and a portion of the Validation chapter noting that the general purpose command language provides executive control of these functions in addition to its role as interpreter, and that the structure, concept, design, and implementation of program MONITOR are described.
LASER BIOLOGY: Optomechanical tests of hydrated biological tissues subjected to laser shaping
NASA Astrophysics Data System (ADS)
Omel'chenko, A. I.; Sobol', E. N.
2008-03-01
The mechanical properties of a matrix are studied upon changing the size and shape of biological tissues during dehydration caused by weak laser-induced heating. The cartilage deformation, dehydration dynamics, and hydraulic conductivity are measured upon laser heating. The hydrated state and the shape of samples of separated fascias and cartilaginous tissues were controlled by using computer-aided processing of tissue images in polarised light.
Bullen, A; Patel, S S; Saggau, P
1997-07-01
The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging.
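The on-line correction for laser-intensity and scanning-spot-brightness variations is not spelled out in the abstract; the sketch below assumes a reference-photodiode ratio followed by a baseline (ΔF/F-style) normalization, which is one common way such corrections are implemented.

```python
import numpy as np

def normalize_point_signal(fluorescence, laser_reference, resting_mean):
    """Per-point correction: divide the fluorescence samples by a simultaneously
    sampled laser reference to remove intensity fluctuations, then express the
    result as a fractional change from the resting (baseline) level."""
    corrected = np.asarray(fluorescence) / np.asarray(laser_reference)
    return (corrected - resting_mean) / resting_mean   # dF/F-style signal
```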
Regression relation for pure quantum states and its implications for efficient computing.
Elsayed, Tarek A; Fine, Boris V
2013-02-15
We obtain a modified version of the Onsager regression relation for the expectation values of quantum-mechanical operators in pure quantum states of isolated many-body quantum systems. We use the insights gained from this relation to show that high-temperature time correlation functions in many-body quantum systems can be controllably computed without complete diagonalization of the Hamiltonians, using instead the direct integration of the Schrödinger equation for randomly sampled pure states. This method is also applicable to quantum quenches and other situations describable by time-dependent many-body Hamiltonians. The method implies exponential reduction of the computer memory requirement in comparison with the complete diagonalization. We illustrate the method by numerically computing infinite-temperature correlation functions for translationally invariant Heisenberg chains of up to 29 spins 1/2. Thereby, we also test the spin diffusion hypothesis and find it in a satisfactory agreement with the numerical results. Both the derivation of the modified regression relation and the justification of the computational method are based on the notion of quantum typicality.
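A toy illustration of the typicality-based approach on a small Heisenberg chain, assuming numpy/scipy: a random pure state is propagated directly with the Schrödinger equation and used to estimate an infinite-temperature autocorrelation function, with no full diagonalization. The paper's chains of up to 29 spins require sparse storage and more careful integration than this sketch.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import expm_multiply

# Spin-1/2 operators
sx = csr_matrix([[0, 0.5], [0.5, 0]])
sy = csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = csr_matrix([[0.5, 0], [0, -0.5]])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = identity(1, format="csr")
    for j in range(n):
        out = kron(out, op if j == i else identity(2, format="csr"), format="csr")
    return out

def heisenberg(n):
    H = csr_matrix((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            H = H + site_op(s, i, n) @ site_op(s, i + 1, n)
    return H

n = 10                                  # modest chain size for illustration
H = heisenberg(n)
A = site_op(sz, n // 2, n)              # observable: S^z at the central site

rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)              # random pure state ~ infinite temperature

dt, nsteps = 0.05, 100
phi = A @ psi
corr = []
for _ in range(nsteps):
    corr.append(np.vdot(psi, A @ phi))  # <psi| A(t) A |psi>, accurate up to typicality error
    psi = expm_multiply(-1j * H * dt, psi)
    phi = expm_multiply(-1j * H * dt, phi)
```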
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
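The hybrid idea can be illustrated schematically: fit a global model to the sparse measurements, evaluate (up-sample) it on a dense grid, and add back interpolated local residuals so that measured local overlay errors are not lost. The polynomial model and interpolation choices below are assumptions for illustration only; the actual OCM modelling is not described in the abstract.

```python
import numpy as np
from scipy.interpolate import griddata

def hybrid_fingerprint(xy_meas, ovl_meas, xy_dense, degree=3):
    """Blend a globally modelled (up-sampled) fingerprint with measured local residuals."""
    x, y = xy_meas[:, 0], xy_meas[:, 1]
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])      # polynomial design matrix
    coef, *_ = np.linalg.lstsq(A, ovl_meas, rcond=None)       # global wafer model

    xd, yd = xy_dense[:, 0], xy_dense[:, 1]
    Ad = np.column_stack([xd**i * yd**j for i, j in terms])
    model_dense = Ad @ coef                                    # up-sampled fingerprint

    residuals = ovl_meas - A @ coef                            # measured local content
    local = griddata(xy_meas, residuals, xy_dense, method="linear", fill_value=0.0)
    return model_dense + local
```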
Garcia-Martin, Juan Antonio; Bayegan, Amir H; Dotu, Ivan; Clote, Peter
2016-10-19
RNA inverse folding is the problem of finding one or more sequences that fold into a user-specified target structure s0, i.e. whose minimum free energy secondary structure is identical to the target s0. Here we consider the ensemble of all RNA sequences that have low free energy with respect to a given target s0. We introduce the program RNAdualPF, which computes the dual partition function Z*, defined as the sum of Boltzmann factors exp(-E(a,s0)/RT) of all RNA nucleotide sequences a compatible with target structure s0. Using RNAdualPF, we efficiently sample RNA sequences that approximately fold into s0, where additionally the user can specify IUPAC sequence constraints at certain positions, and whether to include dangles (energy terms for stacked, single-stranded nucleotides). Moreover, since we also compute the dual partition function Z*(k) over all sequences having GC-content k, the user can require that all sampled sequences have a precise, specified GC-content. Using Z*, we compute the dual expected energy ⟨E*⟩, and use it to show that natural RNAs from the Rfam 12.0 database have higher minimum free energy than expected, thus suggesting that functional RNAs are under evolutionary pressure to be only marginally thermodynamically stable. We show that C. elegans precursor microRNA (pre-miRNA) is significantly non-robust with respect to mutations, by comparing the robustness of each wild type pre-miRNA sequence with 2000 [resp. 500] sequences of the same GC-content generated by RNAdualPF, which approximately [resp. exactly] fold into the wild type target structure. We confirm and strengthen earlier findings that precursor microRNAs and bacterial small noncoding RNAs display plasticity, a measure of structural diversity. We describe RNAdualPF, which rapidly computes the dual partition function Z* and samples sequences having low energy with respect to a target structure, allowing sequence constraints and specified GC-content. Using different inverse folding software, another group had earlier shown that pre-miRNA is mutationally robust, even controlling for compositional bias. Our opposite conclusion suggests a cautionary note that computationally based insights into molecular evolution may heavily depend on the software used. C/C++-software for RNAdualPF is available at http://bioinformatics.bc.edu/clotelab/RNAdualPF.
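The quantities defined above can be written out explicitly. The sums run over nucleotide sequences a compatible with the target structure s0, R is the gas constant and T the temperature; writing the dual expected energy as a Boltzmann-weighted mean is the natural reading of the abstract's definition, hedged here as an assumption.

```latex
\[
  Z^{*} \;=\; \sum_{a \,\sim\, s_{0}} e^{-E(a,\,s_{0})/RT},
  \qquad
  Z^{*}(k) \;=\; \sum_{\substack{a \,\sim\, s_{0} \\ \mathrm{GC}(a)=k}} e^{-E(a,\,s_{0})/RT},
  \qquad
  \langle E^{*} \rangle \;=\; \frac{1}{Z^{*}} \sum_{a \,\sim\, s_{0}} E(a,\,s_{0})\, e^{-E(a,\,s_{0})/RT}.
\]
```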
NASA Astrophysics Data System (ADS)
Yang, Kun-Yuan; Heh, Jia-Sheng
2007-10-01
The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction with traditional laboratory instruction on physics academic achievement, performance of science process skills, and computer attitudes of tenth-grade students. One hundred and fifty students from four classes at one private senior high school in Taoyuan County, Taiwan, R.O.C. were sampled. The students were divided equally into an experimental group and a control group of 75 students each. The pre-test results indicated that the students' entry-level physics academic achievement, science process skills, and computer attitudes were equal for both groups. On the post-test, the experimental group achieved significantly higher mean scores in physics academic achievement and science process skills. There was no significant difference in computer attitudes between the groups. We concluded that the IVPL had potential to help tenth graders improve their physics academic achievement and science process skills.
Visible light scatter measurements of the Advanced X-ray Astronomical Facility /AXAF/ mirror samples
NASA Technical Reports Server (NTRS)
Griner, D. B.
1981-01-01
NASA is studying the properties of mirror surfaces for X-ray telescopes, the data of which will be used to develop the telescope system for the Advanced X-ray Astronomical Facility. Visible light scatter measurements, using a computer controlled scanner, are made of various mirror samples to determine surface roughness. Total diffuse scatter is calculated using numerical integration techniques and used to estimate the rms surface roughness. The data measurements are then compared with X-ray scatter measurements of the same samples. A summary of the data generated is presented, along with graphs showing changes in scatter on samples before and after cleaning. Results show that very smooth surfaces can be polished on the common substrate materials (from 2 to 10 Angstroms), and nickel appears to give the lowest visible light scatter.
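For orientation, the standard scalar total-integrated-scatter (TIS) relation linking diffuse scatter to rms roughness for a smooth surface near normal incidence is sketched below; the study's own numerical-integration procedure may differ in detail from this approximation.

```latex
\[
  \mathrm{TIS} \;=\; \frac{P_{\mathrm{diffuse}}}{P_{\mathrm{specular}} + P_{\mathrm{diffuse}}}
  \;\approx\; \left(\frac{4\pi\,\sigma\,\cos\theta_{i}}{\lambda}\right)^{2}
  \quad\Longrightarrow\quad
  \sigma \;\approx\; \frac{\lambda}{4\pi\cos\theta_{i}}\,\sqrt{\mathrm{TIS}} .
\]
```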
A review of evaluative studies of computer-based learning in nursing education.
Lewis, M J; Davies, R; Jenkins, D; Tait, M I
2001-01-01
Although there have been numerous attempts to evaluate the learning benefits of computer-based learning (CBL) packages in nursing education, the results obtained have been equivocal. A literature search conducted for this review found 25 reports of the evaluation of nursing CBL packages since 1966. Detailed analysis of the evaluation methods used in these reports revealed that most had significant design flaws, including the use of too small a sample group, the lack of a control group, etc. Because of this, the conclusions reached were not always valid. More effort is required in the design of future evaluation studies of nursing CBL packages. Copyright 2001 Harcourt Publishers Ltd.
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Computer programs for generation and evaluation of near-optimum vertical flight profiles
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Waters, M. H.; Patmore, L. C.
1983-01-01
Two extensive computer programs were developed. The first, called OPTIM, generates a reference near-optimum vertical profile, and it contains control options so that the effects of various flight constraints on cost performance can be examined. The second, called TRAGEN, is used to simulate an aircraft flying along an optimum or any other vertical reference profile. TRAGEN is used to verify OPTIM's output, examine the effects of uncertainty in the values of parameters (such as prevailing wind) which govern the optimum profile, or compare the cost performance of profiles generated by different techniques. A general description of these programs, the efforts to add special features to them, and sample results of their usage are presented.
Development of Portable, Wireless and Smartphone Controllable Near-Infrared Spectroscopy System.
Watanabe, Takashi; Sekine, Rui; Mizuno, Toshihiko; Miwa, Mitsuharu
We have developed portable near-infrared tissue oxygenation monitoring systems, called the "PocketNIRS Duo" and the "PocketNIRS HM", which feature wireless data communication and a sampling rate of up to 60 data readings per second. The systems can be controlled by smartphone or personal computer. We demonstrate the efficacy of the systems for monitoring changes in brain and arm muscle hemodynamics and oxygenation in breath-holding and cuff-occlusion tests, respectively. Our systems should prove to be useful as an oxygenation monitor not only in research but also in healthcare applications.
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and some problems for the Variable dimension Automatic Synthesis Program (VASP) are submitted. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and pseudo-inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in the conversational mode on the Ames 360/67 computer.
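One standard way to solve the sampled-data (discrete-time) matrix Riccati equation mentioned above is backward iteration to steady state. The sketch below is illustrative only and is not the VASP algorithm itself.

```python
import numpy as np

def sampled_data_riccati(A, B, Q, R, n_steps=500):
    """Backward iteration of the discrete matrix Riccati equation; returns the
    steady-state cost matrix P and the corresponding optimal feedback gain K."""
    P = Q.copy()
    for _ in range(n_steps):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain at this iteration
        P = Q + A.T @ P @ (A - B @ K)                       # Riccati recursion
    return P, K
```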
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer-aided design, computer-aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
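A much-simplified sketch of the two-step idea, assuming scipy is available: recursive bisection of the data produces coarse interior knots, and a least-squares cubic B-spline is then fitted. The nonlinear refinement of knot locations and continuity levels described in the paper is omitted here, and the knot set is not guaranteed to satisfy the spline's interlacing conditions for pathological data.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def coarse_knots(x, y, tol, lo=None, hi=None, knots=None):
    """Step 1 (simplified): bisect each segment until a cubic least-squares fit
    meets the error tolerance; segment boundaries become coarse interior knots."""
    if knots is None:
        lo, hi, knots = 0, len(x) - 1, []
    seg_x, seg_y = x[lo:hi + 1], y[lo:hi + 1]
    resid = seg_y - np.polyval(np.polyfit(seg_x, seg_y, 3), seg_x)
    if np.max(np.abs(resid)) > tol and hi - lo > 8:
        mid = (lo + hi) // 2
        coarse_knots(x, y, tol, lo, mid, knots)
        knots.append(x[mid])
        coarse_knots(x, y, tol, mid, hi, knots)
    return knots

def fit_bspline(x, y, tol=1e-2):
    """Fit a cubic B-spline with adaptively placed knots (knot refinement omitted)."""
    t = coarse_knots(x, y, tol)
    return LSQUnivariateSpline(x, y, t, k=3)
```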
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Knowledge-based control for robot self-localization
NASA Technical Reports Server (NTRS)
Bennett, Bonnie Kathleen Holte
1993-01-01
Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.
An FTIR point sensor for identifying chemical WMD and hazardous materials
NASA Astrophysics Data System (ADS)
Norman, Mark L.; Gagnon, Aaron M.; Reffner, John A.; Schiering, David W.; Allen, Jeffrey D.
2004-03-01
A new point sensor for identifying chemical weapons of mass destruction and other hazardous materials based on Fourier transform infrared (FT-IR) spectroscopy is presented. The sensor is a portable, fully functional FT-IR system that features a miniaturized Michelson interferometer, an integrated diamond attenuated total reflection (ATR) sample interface, and an embedded on-board computer. Samples are identified by an automated search algorithm that compares their infrared spectra to digitized databases that include reference spectra of nerve and blister agents, toxic industrial chemicals, and other hazardous materials. The hardware and software are designed for use by technicians with no background in infrared spectroscopy. The unit, which is fully self-contained, can be hand-carried and used in a hot zone by personnel in Level A protective gear, and subsequently decontaminated by spraying or immersion. Wireless control by a remote computer is also possible. Details of the system design and performance, including results of field validation tests, are discussed.
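A minimal sketch of library-based identification by spectral correlation follows. The instrument's search algorithm is proprietary and not described in detail here, so a simple correlation score stands in for it; the dictionary layout and function name are illustrative.

```python
import numpy as np

def identify(spectrum, library):
    """Rank library entries by correlation with the measured ATR spectrum.
    `library` maps substance names to reference spectra sampled on the same
    wavenumber grid as `spectrum`."""
    scores = {}
    s = (spectrum - spectrum.mean()) / spectrum.std()
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        scores[name] = float(np.dot(s, r) / len(s))      # Pearson-style correlation
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```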
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1975-01-01
The effects of steady wind, turbulence, data sample rate, and control-actuator natural frequency on the response of a possible automatic landing system were investigated in a nonstatistical study. The results indicate that the system, which interfaces with the microwave landing system, functions well in winds and turbulence as long as the guidance law contains proper compensation for wind. The system response was satisfactory down to five data samples per second, which makes the system compatible with the microwave landing system. No adverse effects were observed when actuator natural frequency was lowered. For limiting cases, those cases where the roll angle goes to zero just as the airplane touches down, the basic method for computing the turn-algorithm gains proved unsatisfactory and unacceptable landings resulted. Revised computation methods gave turn-algorithm gains that resulted in acceptable landings. The gains provided by the new method also improved the touchdown conditions for acceptable landings over those obtained when the gains were determined by the old method.
Gradient-free MCMC methods for dynamic causal modelling.
Sengupta, Biswa; Friston, Karl J; Penny, Will D
2015-05-15
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
The Role of Parents and Related Factors on Adolescent Computer Use
Epstein, Jennifer A.
2012-01-01
Background Research suggests that parents are important influences on their adolescents’ computer activity. Spending too much time on the computer for recreational purposes in particular has been found to be related to areas of public health concern in children/adolescents, including obesity and substance use. Design and Methods The goal of the research was to determine the association between recreational computer use and potentially linked factors (parental monitoring, social influences to use computers including parents, age of first computer use, self-control, and particular internet activities). Participants (aged 13-17 years and residing in the United States) were recruited via the Internet to complete an anonymous survey online using a survey tool. The target sample of 200 participants who completed the survey was achieved. The sample’s average age was 16 years, and 63% were girls. Results A set of regressions was run with recreational computer use as the dependent variable. Conclusions Less parental monitoring, younger age at first computer use, listening to or downloading music from the internet more frequently, using the internet for educational purposes less frequently, and parent’s use of the computer for pleasure were related to spending a greater percentage of time on non-school computer use. These findings suggest the importance of parental monitoring and parental computer use on their children’s own computer use, and the influence of some internet activities on adolescent computer use. Finally, programs aimed at parents to help them increase the age when their children start using computers and learn how to place limits on recreational computer use are needed. PMID:25170449
NASA Astrophysics Data System (ADS)
Sagnotti, Leonardo
2013-04-01
Modern rock magnetometers and stepwise demagnetization procedures result in the production of large datasets, which need versatile and fast software for their display and analysis. Various software packages for paleomagnetic analyses have been recently developed to overcome the problems linked to the limited capability and the loss of operability of early codes written in obsolete computer languages and/or platforms, not compatible with modern 64-bit processors. The Demagnetization Analysis in Excel (DAIE) workbook is a new software tool designed to make the analysis of demagnetization data easy and accessible on an application (Microsoft Excel) widely diffused and available on both the Microsoft Windows and Mac OS X operating systems. The widespread diffusion of Excel should guarantee a long-term working life, since compatibility and functionality of current Excel files should most likely be maintained during the development of new processors and operating systems. DAIE is designed for viewing and analyzing stepwise demagnetization data of both discrete and u-channel samples. DAIE consists of a single file and has an open modular structure organized in 10 distinct worksheets. The standard demagnetization diagrams and various parameters of common use are shown on the same worksheet, including selectable parameters and user's choices. The characteristic remanence components may be computed by principal component analysis (PCA) on a selected interval of demagnetization steps. The PCA data can be saved either sample by sample or automatically, by applying the selected choices to all the samples included in the file. The DAIE open structure allows easy personalization, development and improvement. The workbook has the following features, which may be valuable for various users: - Operability on nearly all computers and platforms; - Easy input of demagnetization data by "copy and paste" from ASCII files; - Easy export of computed parameters and demagnetization plots; - Complete control of the whole workflow and possibility of implementation of the workbook by any user; - Modular structure in distinct worksheets for each type of analysis and plot, in order to make implementation and personalization easier; - Opportunity to use the workbook for educational purposes, since all the computations and analyses are easily traceable and accessible; - Automatic and fast analysis of a large batch of demagnetization data, such as those measured on u-channel samples. The DAIE workbook and the "User manual" are available for download on a dedicated web site (http://roma2.rm.ingv.it/en/facilities/software/49/daie).
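The PCA step mentioned above is conventionally the Kirschvink-style principal component fit over the selected demagnetization interval. The sketch below is a minimal version of that standard analysis; DAIE's own implementation may differ in detail (e.g. anchoring options and sign conventions).

```python
import numpy as np

def pca_direction(xyz, anchored=False):
    """Principal component analysis of a selected interval of demagnetization
    steps (rows of xyz are Cartesian remanence vectors). Returns the best-fit
    direction and the maximum angular deviation (MAD) in degrees."""
    X = np.asarray(xyz, dtype=float)
    if not anchored:
        X = X - X.mean(axis=0)          # free line fit; anchored fit forces the origin
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    direction = vt[0] / np.linalg.norm(vt[0])
    mad = np.degrees(np.arctan(np.sqrt(s[1]**2 + s[2]**2) / s[0]))
    return direction, mad
```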
Effect size calculation in meta-analyses of psychotherapy outcome research.
Hoyt, William T; Del Re, A C
2018-05-01
Meta-analysis of psychotherapy intervention research normally examines differences between treatment groups and some form of comparison group (e.g., wait list control; alternative treatment group). The effect of treatment is normally quantified as a standardized mean difference (SMD). We describe procedures for computing unbiased estimates of the population SMD from sample data (e.g., group Ms and SDs), and provide guidance about a number of complications that may arise related to effect size computation. These complications include (a) incomplete data in research reports; (b) use of baseline data in computing SMDs and estimating the population standard deviation (σ); (c) combining effect size data from studies using different research designs; and (d) appropriate techniques for analysis of data from studies providing multiple estimates of the effect of interest (i.e., dependent effect sizes). Clinical or Methodological Significance of this article: Meta-analysis is a set of techniques for producing valid summaries of existing research. The initial computational step for meta-analyses of research on intervention outcomes involves computing an effect size quantifying the change attributable to the intervention. We discuss common issues in the computation of effect sizes and provide recommended procedures to address them.
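For reference, the standard computations behind an SMD estimate from group summary statistics (subscripts T and C denote treatment and comparison groups) can be written as follows. The small-sample (Hedges) correction shown uses the common approximation rather than the exact gamma-function form, and may differ slightly from the procedures the authors recommend.

```latex
\[
  d \;=\; \frac{M_{T} - M_{C}}{SD_{\text{pooled}}},
  \qquad
  SD_{\text{pooled}} \;=\; \sqrt{\frac{(n_{T}-1)\,SD_{T}^{2} + (n_{C}-1)\,SD_{C}^{2}}{n_{T}+n_{C}-2}},
  \qquad
  g \;\approx\; \left(1 - \frac{3}{4(n_{T}+n_{C}-2)-1}\right) d .
\]
```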
Cosart, Ted; Beja-Pereira, Albano; Luikart, Gordon
2014-11-01
The computer program EXONSAMPLER automates the sampling of thousands of exon sequences from publicly available reference genome sequences and gene annotation databases. It was designed to provide exon sequences for the efficient, next-generation gene sequencing method called exon capture. The exon sequences can be sampled by a list of gene name abbreviations (e.g. IFNG, TLR1), or by sampling exons from genes spaced evenly across chromosomes. It provides a list of genomic coordinates (a bed file), as well as a set of sequences in fasta format. User-adjustable parameters for collecting exon sequences include a minimum and maximum acceptable exon length, maximum number of exonic base pairs (bp) to sample per gene, and maximum total bp for the entire collection. It allows for partial sampling of very large exons. It can preferentially sample upstream (5 prime) exons, downstream (3 prime) exons, both external exons, or all internal exons. It is written in the Python programming language using its free libraries. We describe the use of EXONSAMPLER to collect exon sequences from the domestic cow (Bos taurus) genome for the design of an exon-capture microarray to sequence exons from related species, including the zebu cow and wild bison. We collected ~10% of the exome (~3 million bp), including 155 candidate genes, and ~16,000 exons evenly spaced genomewide. We prioritized the collection of 5 prime exons to facilitate discovery and genotyping of SNPs near upstream gene regulatory DNA sequences, which control gene expression and are often under natural selection. © 2014 John Wiley & Sons Ltd.
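A minimal sketch of the kind of greedy exon selection the user-adjustable parameters above describe. The record layout and field names are hypothetical, and EXONSAMPLER's actual logic (for example its 5'/3' prioritization, even genomewide spacing, and partial sampling of very large exons) is more involved.

```python
def sample_exons(exons, min_len=100, max_len=10_000,
                 max_bp_per_gene=2_000, max_total_bp=3_000_000):
    """Greedily select exons subject to per-exon length limits, a per-gene
    bp cap, and a total-collection bp cap. `exons` is an iterable of
    (gene, chrom, start, end) records already sorted in the preferred order."""
    per_gene, total, selected = {}, 0, []
    for gene, chrom, start, end in exons:
        length = end - start
        if not (min_len <= length <= max_len):
            continue
        if per_gene.get(gene, 0) + length > max_bp_per_gene:
            continue
        if total + length > max_total_bp:
            break
        selected.append((gene, chrom, start, end))
        per_gene[gene] = per_gene.get(gene, 0) + length
        total += length
    return selected
```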
A study of pilot modeling in multi-controller tasks
NASA Technical Reports Server (NTRS)
Whitbeck, R. F.; Knight, J. R.
1972-01-01
A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.
Optimization and Control of Cyber-Physical Vehicle Systems
Bradley, Justin M.; Atkins, Ella M.
2015-01-01
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, M. G.
The suitability of several temperature measurement schemes for an irradiation creep experiment is examined. It is found that the specimen resistance can be used to measure and control the sample temperature if compensated for resistance drift due to radiation and annealing effects. A modified Kelvin bridge is presented that allows compensation for resistance drift by periodically checking the sample resistance at a controlled ambient temperature. A new phase-insensitive method for detecting the bridge error signals is presented. The phase-insensitive detector is formed by averaging the magnitude of two bridge voltages. Although this method is substantially less sensitive to stray reactances in the bridge than conventional phase-sensitive detectors, it is sensitive to gain stability and linearity of the rectifier circuits. Accuracy limitations of rectifier circuits are examined both theoretically and experimentally in great detail. Both hand analyses and computer simulations of rectifier errors are presented. Finally, the design of a temperature control system based on sample resistance measurement is presented. The prototype is shown to control a 316 stainless steel sample to within a 0.15°C short term (10 sec) and a 0.03°C long term (10 min) standard deviation at temperatures between 150 and 700°C. The phase-insensitive detector typically contributes less than 10 ppm peak resistance measurement error (0.04°C at 700°C for 316 stainless steel or 0.005°C at 150°C for zirconium).
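The drift-compensation idea can be illustrated with a small sketch: the measured resistance is rescaled by the ratio observed during a periodic check at the controlled ambient temperature and then converted to temperature through a linear R(T) model. The model form and constants are illustrative assumptions, not the calibration used in the experiment.

```python
def temperature_from_resistance(r_meas, r_ref_ambient, r_nominal_ambient,
                                r0, alpha, t0=20.0):
    """Resistance-based temperature estimate with drift compensation, assuming
    a linear model R(T) = r0 * (1 + alpha * (T - t0))."""
    drift_factor = r_nominal_ambient / r_ref_ambient   # from the periodic ambient check
    r_corrected = r_meas * drift_factor                # remove radiation/annealing drift
    return t0 + (r_corrected / r0 - 1.0) / alpha
```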
Implementation of Cloud based next generation sequencing data analysis in a clinical laboratory.
Onsongo, Getiria; Erdmann, Jesse; Spears, Michael D; Chilton, John; Beckman, Kenneth B; Hauge, Adam; Yohe, Sophia; Schomaker, Matthew; Bower, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat
2014-05-23
The introduction of next generation sequencing (NGS) has revolutionized molecular diagnostics, though several challenges remain limiting the widespread adoption of NGS testing into clinical practice. One such difficulty includes the development of a robust bioinformatics pipeline that can handle the volume of data generated by high-throughput sequencing in a cost-effective manner. Analysis of sequencing data typically requires a substantial level of computing power that is often cost-prohibitive to most clinical diagnostics laboratories. To address this challenge, our institution has developed a Galaxy-based data analysis pipeline which relies on a web-based, cloud-computing infrastructure to process NGS data and identify genetic variants. It provides additional flexibility, needed to control storage costs, resulting in a pipeline that is cost-effective on a per-sample basis. It does not require the usage of EBS disk to run a sample. We demonstrate the validation and feasibility of implementing this bioinformatics pipeline in a molecular diagnostics laboratory. Four samples were analyzed in duplicate pairs and showed 100% concordance in mutations identified. This pipeline is currently being used in the clinic and all identified pathogenic variants confirmed using Sanger sequencing further validating the software.
Effect of computer radiation on weight and oxidant-antioxidant status of mice.
Pei, Xuexian; Gu, Qijun; Ye, Dongdong; Wang, Yang; Zou, Xu; He, Lianping; Jin, Yuelong; Yao, Yingshui
2014-10-20
The aim was to explore the effects of computer radiation on the weight and oxidant-antioxidant status of mice, and further to confirm whether vitamin C protects against computer radiation. Sixty male adult ICR mice were randomly divided into six groups, each given a different treatment as follows: group A was the control; group B was given vitamin C; group C was exposed to computer radiation 8 h/day; group D was given vitamin C and exposed to computer radiation 8 h/day; group E was exposed to computer radiation 16 h/day; and group F was given vitamin C plus exposure to computer radiation 16 h/day. After seven weeks, the mice were sacrificed to collect blood samples, and total antioxidant capacity (T-AOC) and alkaline phosphatase (ALP) content in serum or liver tissue were determined by ELISA. No difference in weight change was found among the six groups at any week. In groups C, D and F, the liver tissue T-AOC levels were higher than in group A. In groups B, C and E, the serum ALP levels were lower than in group A (P<0.05). The study indicates that computer radiation may have an adverse effect on T-AOC and ALP levels of mice, and that vitamin C has a protective effect against computer radiation. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
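A minimal sketch of the kind of fine-grained path sampling described above, for a single branch: given values already drawn at the parent and child nodes, interior discretization points are filled in with a Brownian bridge. Function names and arguments are illustrative; the resampling moves in the paper operate jointly along and across branches.

```python
import numpy as np

def sample_branch_path(x_parent, x_child, branch_length, n_points, sigma, rng):
    """Sample a Brownian trajectory along one branch at n_points interior
    discretization points, conditioned on the values at the two end nodes."""
    t = np.linspace(0.0, branch_length, n_points + 2)
    # Unconditioned Brownian motion started at the parent value ...
    increments = rng.normal(0.0, sigma * np.sqrt(np.diff(t)))
    w = x_parent + np.concatenate([[0.0], np.cumsum(increments)])
    # ... pinned to the child value by the standard bridge construction.
    bridge = w + (t / branch_length) * (x_child - w[-1])
    return t, bridge

# Example: rng = np.random.default_rng(1); sample_branch_path(0.0, 0.3, 1.0, 20, 0.5, rng)
```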
2014-01-01
Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic-sequencing data have emerged; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps, including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputing explicit data matrices or matrices constructed in user written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
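The described program is a FORTRAN code, but the kind of analysis it performs is easy to illustrate in modern terms. The sketch below, assuming scipy, shows two of the listed steps for an arbitrary state-variable model: computing open-loop eigenvalues and forming the sampled-data (zero-order-hold discretized) representation; the numerical values are illustrative.

```python
import numpy as np
from scipy.signal import StateSpace, cont2discrete

# Continuous-time plant in state-variable form (values are illustrative)
A = np.array([[0.0, 1.0], [-2.0, -0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print("continuous eigenvalues:", np.linalg.eigvals(A))

# Zero-order-hold discretization at a 0.1 s sample period, as one would do
# before analyzing the sampled-data version of the system
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=0.1, method="zoh")
print("sampled-data eigenvalues:", np.linalg.eigvals(Ad))

sysd = StateSpace(Ad, Bd, Cd, Dd, dt=dt)   # discrete model for further analysis
```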
NASA Astrophysics Data System (ADS)
Ageev, E. V.; Altukhov, A. Yu; Malneva, Yu V.; Novikov, A. N.
2018-03-01
The results of a wear-resistance investigation of electrospark coatings, deposited using electrode material made from electroerosive powders of hard alloy VK-8 (90%) with an addition of high-speed steel R6M5 powder (10%), are presented. The electrospark coatings were formed on samples of 30KhGSA steel using these electrodes and the UR-121 installation. The friction coefficient and the wear rate of the sample surface and the counterbody were measured on an automated friction machine "Tribometer" (CSM Instruments, Switzerland), controlled by a computer, according to the standard ball-on-disk test scheme.
Fuzzy Adaptive Control Design and Discretization for a Class of Nonlinear Uncertain Systems.
Zhao, Xudong; Shi, Peng; Zheng, Xiaolong
2016-06-01
In this paper, tracking control problems are investigated for a class of uncertain nonlinear systems in lower triangular form. First, a state-feedback controller is designed by using the adaptive backstepping technique and the universal approximation ability of fuzzy logic systems. During the design procedure, a method with reduced computation is developed by constructing a single maximum adaptive parameter. Furthermore, adaptive controllers are also designed for the systems with a nonsymmetric dead-zone. Then, a sampled-data control scheme is presented to discretize the obtained continuous-time controller by using the forward Euler method. It is shown that both the proposed continuous and discrete controllers can ensure that the system output tracks the target signal with a small bounded error and that the other closed-loop signals remain bounded. Two simulation examples are presented to verify the effectiveness and applicability of the proposed design techniques.
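To make the forward Euler discretization step concrete, here is a minimal Python sketch that discretizes a simple continuous-time adaptive tracking controller for a scalar first-order plant. The plant, gains, and adaptation law are illustrative assumptions and not the fuzzy backstepping design of the paper.

import numpy as np

def simulate(T=0.01, steps=2000):
    """Forward-Euler discretization of a simple continuous-time adaptive
    tracking controller (illustrative first-order plant, not the paper's design).
    Plant: x_dot = a*x + u with unknown a; reference r(t) = sin(t)."""
    a_true = 1.5          # unknown plant parameter
    gamma, k = 5.0, 4.0   # adaptation gain and feedback gain (assumed values)
    x, a_hat = 0.0, 0.0
    errs = []
    for i in range(steps):
        t = i * T
        r = np.sin(t)
        e = x - r
        # certainty-equivalence control law evaluated at the sampling instant
        u = -k * e - a_hat * x + np.cos(t)
        # forward Euler update of plant state and adaptive parameter
        x = x + T * (a_true * x + u)
        a_hat = a_hat + T * (gamma * e * x)
        errs.append(abs(e))
    return max(errs[-200:])  # largest tracking error over the last 200 samples

print(simulate())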
ERIC Educational Resources Information Center
Sinn, John W.
This instructional manual contains five learning activity packets for use in a workshop on computer numerical control for computer-aided manufacturing. The lessons cover the following topics: introduction to computer-aided manufacturing, understanding the lathe, using the computer, computer numerically controlled part programming, and executing a…
NASA Astrophysics Data System (ADS)
Reichow, M. K.; Brewer, T. S.; Marvin, L. G.; Lee, S. V.
2008-12-01
Little information presently exists on the heterogeneity of hydrothermal alteration in the oceanic crust or the variability of the associated thermal, fluid, and chemical fluxes. Formation porosities are important controls on these fluxes, and porosity measurements are routinely collected during wireline logging operations. These estimates of the formation porosity are measures of the moderating power of the formation in response to bombardment by neutrons. The neutron absorption macroscopic cross-section (Σ = σρ) is a representation of the ability of the rock to slow down neutrons, and as such can be used to invert the porosity of a sample. Boron, lithium and other trace elements are important controls on Σ-values, and the distribution of these is influenced by secondary low-temperature alteration processes. Consequently, computed Σ-values may be used to discriminate between various basalt types and to identify areas of secondary alteration. Critical in this analysis is the degree of alteration, since elements such as B and Li can dramatically affect the Σ-value, leading to erroneous porosity values. We analysed over 150 'pool-samples' for S, Li, Be and B element concentrations to estimate their contribution to the measured neutron porosity. These chemical analyses allow the calculation of the model Σ-values for individual samples. Using a range of variably altered samples recovered during IODP Expeditions 309 and 312, we provide bulk estimates of alteration within the drilled section using the measured neutron porosity. B concentration in Hole 1256D increases with depth, with sharp rises at 959 and 1139 mbsf. Elevated wireline neutron porosities cannot always be directly linked with high B content. However, our preliminary results imply that increased neutron porosity (~15) at depths below 1100 mbsf may reflect hydrothermal alteration rather than formation porosity. This interpretation is supported when compared with generally lower computed porosity estimates derived from resistivity measurements for the same intervals.
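A minimal sketch of the kind of calculation implied by Σ = σρ is given below: the contribution of trace boron and lithium to the thermal-neutron macroscopic absorption cross-section is computed from assumed concentrations. The density, concentrations, and nominal cross-section values are illustrative placeholders, not measurements from Hole 1256D.

# Minimal sketch: contribution of trace B and Li to the thermal-neutron
# macroscopic absorption cross-section of a basalt, Sigma = sum(n_i * sigma_i).
# Cross-section values are nominal thermal (2200 m/s) absorption values and the
# rock composition is purely illustrative.
N_A = 6.022e23          # Avogadro's number, atoms/mol
BARN = 1e-24            # cm^2

rho = 2.9               # bulk density of the basalt sample, g/cm^3 (assumed)
elements = {
    # element: (weight fraction, atomic mass g/mol, sigma_abs in barns)
    "B":  (50e-6, 10.81, 767.0),    # 50 ppm boron (assumed concentration)
    "Li": (10e-6,  6.94,  70.5),    # 10 ppm lithium (assumed concentration)
}

sigma_macroscopic = 0.0
for name, (w, A, sigma_barn) in elements.items():
    n_i = rho * w * N_A / A                      # atoms per cm^3
    sigma_macroscopic += n_i * sigma_barn * BARN
print(f"Sigma from B + Li: {sigma_macroscopic:.4f} 1/cm")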
Improving self-regulated learning junior high school students through computer-based learning
NASA Astrophysics Data System (ADS)
Nurjanah; Dahlan, J. A.
2018-05-01
This study is motivated by the importance of self-regulated learning as an affective aspect that determines students' success in learning mathematics. The purpose of this research is to examine the improvement of junior high school students' self-regulated learning through computer-based learning, both overall and by school level. This research used a quasi-experimental method, because the individual sample subjects were not randomly selected. The research design used was a pretest-and-posttest control group design. Subjects were grade VIII junior high school students in Bandung, taken from a high-level school (A) and a middle-level school (B). The results showed that the improvement in self-regulated learning of students who received computer-based learning was higher than that of students who received conventional learning. School-level factors had a significant effect on the improvement of the students' self-regulated learning.
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the utilization of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served order with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
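A minimal Python sketch of the Constrained Position Shifting idea follows: among all landing orders in which no aircraft is shifted more than a fixed number of positions from its first-come, first-served position, the order with the earliest final landing time is selected. The separation matrix and arrival data are illustrative assumptions, not the values used in the study.

from itertools import permutations

# Illustrative wake-separation times in seconds between leader/follower weight classes.
SEPARATION = {("H", "H"): 96, ("H", "L"): 181, ("L", "H"): 72, ("L", "L"): 82}

def schedule(order, eta):
    """Greedy landing times for a given order: each aircraft lands no earlier
    than its ETA and no earlier than the required separation after its leader."""
    t, times, prev = None, [], None
    for i in order:
        t = eta[i][1] if prev is None else max(eta[i][1], t + SEPARATION[(eta[prev][0], eta[i][0])])
        times.append(t)
        prev = i
    return times

def cps_best(eta, max_shift=4):
    """Best landing order under the Constrained Position Shifting rule."""
    fcfs = sorted(range(len(eta)), key=lambda i: eta[i][1])
    best = None
    for order in permutations(fcfs):
        if all(abs(order.index(i) - fcfs.index(i)) <= max_shift for i in fcfs):
            finish = schedule(order, eta)[-1]
            if best is None or finish < best[0]:
                best = (finish, order)
    return best

# (weight class, ETA in seconds) for six arrivals
eta = [("H", 0), ("L", 20), ("L", 45), ("H", 60), ("L", 90), ("H", 110)]
print(cps_best(eta))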
Domingo-Almenara, Xavier; Brezmes, Jesus; Vinaixa, Maria; Samino, Sara; Ramirez, Noelia; Ramon-Krauel, Marta; Lerin, Carles; Díaz, Marta; Ibáñez, Lourdes; Correig, Xavier; Perera-Lluna, Alexandre; Yanes, Oscar
2016-10-04
Gas chromatography coupled to mass spectrometry (GC/MS) has been a long-standing approach used for identifying small molecules due to the highly reproducible ionization process of electron impact ionization (EI). However, the use of GC-EI MS in untargeted metabolomics produces large and complex data sets characterized by coeluting compounds and extensive fragmentation of molecular ions caused by the hard electron ionization. In order to identify and extract quantitative information on metabolites across multiple biological samples, integrated computational workflows for data processing are needed. Here we introduce eRah, a free computational tool written in the open language R composed of five core functions: (i) noise filtering and baseline removal of GC/MS chromatograms, (ii) an innovative compound deconvolution process using multivariate analysis techniques based on compound match by local covariance (CMLC) and orthogonal signal deconvolution (OSD), (iii) alignment of mass spectra across samples, (iv) missing compound recovery, and (v) identification of metabolites by spectral library matching using publicly available mass spectra. eRah outputs a table with compound names, matching scores and the integrated area of compounds for each sample. The automated capabilities of eRah are demonstrated by the analysis of GC-time-of-flight (TOF) MS data from plasma samples of adolescents with hyperinsulinaemic androgen excess and healthy controls. The quantitative results of eRah are compared to centWave, the peak-picking algorithm implemented in the widely used XCMS package, MetAlign, and ChromaTOF software. Significantly dysregulated metabolites are further validated using pure standards and targeted analysis by GC-triple quadrupole (QqQ) MS, LC-QqQ, and NMR. eRah is freely available at http://CRAN.R-project.org/package=erah .
Nonhomogeneous results in place learning among panic disorder patients with agoraphobia.
Gorini, Alessandra; Schruers, Koen; Riva, Giuseppe; Griez, Eric
2010-10-30
Patients affected by panic disorder with agoraphobia (PDA) often suffer from visuo-spatial disturbances. In the present study, we tested the place-learning abilities of a sample of 31 PDA patients compared to 31 healthy controls (CTR) using the computer-generated arena (C-G Arena), a desktop-based computer program developed at the University of Arizona (Jacobs et al., 1997; for further detail about the program, see http://web.arizona.edu/~arg/data.html). Subjects were asked to search the computer-generated space, over several trials, for the location of a hidden target. Results showed that control subjects rapidly learned to locate the invisible target and consistently returned to it, while PDA patients were divided into two subgroups: some of them (PDA-A) were as good as controls in place learning, while others (PDA-B) were unable to learn the correct strategies to find the target. Further analyses revealed that PDA-A patients were significantly younger and had been affected by panic disorder for less time than PDA-B patients, indicating that age and duration of illness can be critical factors that influence place-learning abilities. The existence of two different subgroups of PDA patients who differ in their spatial orientation abilities could provide new insight into the mechanisms of panic and open new perspectives in the cognitive-behavioral treatment of this widespread and disabling disorder.
A fast algorithm to compute precise type-2 centroids for real-time control applications.
Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R
2015-02-01
An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than those of existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in system (plant) states, the proposed IT2 FS-based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in plant response.
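The span of uncertainty discussed above is the width of the centroid interval of the IT2 FS. The Python sketch below computes that interval by exhaustive search over the switch point; this brute-force approach is a stand-in for the iterative Karnik-Mendel procedure and is not the closed-form formula proposed in the paper. The example membership functions are arbitrary.

import numpy as np

def it2_centroid_bounds(x, lmf, umf):
    """Centroid interval [c_l, c_r] of an interval type-2 fuzzy set by
    exhaustive search over the switch point (brute-force stand-in for the
    iterative Karnik-Mendel procedure)."""
    n = len(x)
    c_left, c_right = np.inf, -np.inf
    for k in range(n + 1):
        # left centroid candidate: upper MF up to k, lower MF afterwards
        w_l = np.concatenate((umf[:k], lmf[k:]))
        # right centroid candidate: lower MF up to k, upper MF afterwards
        w_r = np.concatenate((lmf[:k], umf[k:]))
        c_left = min(c_left, np.dot(x, w_l) / np.sum(w_l))
        c_right = max(c_right, np.dot(x, w_r) / np.sum(w_r))
    return c_left, c_right

x = np.linspace(0.0, 10.0, 101)
umf = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)          # upper membership function
lmf = 0.6 * np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)    # lower membership function
c_l, c_r = it2_centroid_bounds(x, lmf, umf)
print(c_l, c_r, "span of uncertainty:", c_r - c_l)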
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large-scale computations is a realistic model of industrial propellants which retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
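As a toy illustration of the overlap test at the heart of any sphere-packing code, the Python sketch below performs simple random sequential addition of polydisperse spheres in a cube. It is far cruder than the growth-based algorithm developed in the thesis, and the radii, box size, and counts are arbitrary.

import numpy as np

def random_sphere_pack(box=10.0, radii=(1.0, 0.5), n_target=150, max_tries=15000, seed=1):
    """Minimal random sequential addition of non-overlapping spheres in a cube;
    a crude illustration of the overlap test, not the thesis algorithm."""
    rng = np.random.default_rng(seed)
    centers, rads = [], []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        r = rng.choice(radii)                       # polydisperse: pick a radius
        c = rng.uniform(r, box - r, size=3)         # keep the sphere inside the box
        ok = all(np.linalg.norm(c - ci) >= r + ri for ci, ri in zip(centers, rads))
        if ok:
            centers.append(c)
            rads.append(r)
    volume_fraction = sum(4.0 / 3.0 * np.pi * r**3 for r in rads) / box**3
    return np.array(centers), np.array(rads), volume_fraction

_, _, phi = random_sphere_pack()
print("packing fraction:", phi)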
COMSAC: Computational Methods for Stability and Control. Part 1
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?
Planning and processing multistage samples with a computer program, MUST.
John W. Hazard; Larry E. Stewart
1974-01-01
A computer program was written to handle multistage sampling designs in insect populations. It is, however, general enough to be used for any population where the number of stages does not exceed three. The program handles three types of sampling situations, all of which assume equal probability sampling. Option 1 takes estimates of sample variances, costs, and either...
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
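The intractable matrix functions mentioned above are matrix permanents. As a small illustration of why exact classical evaluation scales poorly, here is a Python sketch of Ryser's inclusion-exclusion formula, which costs O(2^n n^2); this is only the basic permanent formula, not the Metropolised independence sampler developed in the paper.

import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula;
    the matrix permanent governs multi-photon transition amplitudes in
    boson sampling."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))   # 1*4 + 2*3 = 10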
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and is a reflection of neuromuscular activity. Therefore, auxiliary limb equipment can be controlled by utilizing sEMG classification in order to help physically disabled patients operate the mouse. The aim was to develop a new method to extract sEMG generated by finger motion and apply novel features to classify sEMG. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix image-based feature extraction method, which differs from the classical methods based on the time domain or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method enables features to be extracted by appropriately enlarging the sample signals' energy. The classical machine learning classifiers all performed well using these features.
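A minimal sketch of the final classification step is shown below: flattened two-dimensional feature maps are classified with an SVM using scikit-learn. The data are random surrogates standing in for the sEMG feature maps, and the map dimensions are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Classify two-dimensional "feature map" samples (random surrogates, not real
# sEMG data) with an SVM, mirroring the index- vs middle-finger task above.
rng = np.random.default_rng(0)
n_per_class, h, w = 100, 8, 16          # feature-map size is an assumption
maps_index = rng.normal(0.0, 1.0, size=(n_per_class, h, w))
maps_middle = rng.normal(0.5, 1.0, size=(n_per_class, h, w))   # shifted mean

X = np.vstack([maps_index, maps_middle]).reshape(2 * n_per_class, -1)  # flatten maps
y = np.array([0] * n_per_class + [1] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))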
Ultralow-Power Digital Correlator for Microwave Polarimetry
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Hass, K. Joseph
2004-01-01
A recently developed high-speed digital correlator is especially well suited for processing readings of a passive microwave polarimeter. This circuit computes the autocorrelations of, and the cross-correlations among, data in four digital input streams representing samples of in-phase (I) and quadrature (Q) components of two intermediate-frequency (IF) signals, denoted A and B, that are generated in heterodyne reception of two microwave signals. The IF signals arriving at the correlator input terminals have been digitized to three levels (-1, 0, 1) at a sampling rate up to 500 MHz. Two bits (representing sign and magnitude) are needed to represent the instantaneous datum in each input channel; hence, eight bits are needed to represent the four input signals during any given cycle of the sampling clock. The accumulation (integration) time for the correlation is programmable in increments of 2^8 cycles of the sampling clock, up to a maximum of 2^24 cycles. The basic functionality of the correlator is embodied in 16 correlation slices, each of which contains identical logic circuits and counters (see figure). The first stage of each correlation slice is a logic gate that computes one of the desired correlations (for example, the autocorrelation of the I component of A or the negative of the cross-correlation of the I component of A and the Q component of B). The sampling of the logic gate output is controlled by the sampling-clock signal, and an 8-bit counter increments in every clock cycle when the logic gate generates output. The most significant bit of the 8-bit counter is sampled by a 16-bit counter with a clock signal at 2^-8 times the frequency of the sampling clock.
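A software model of one correlation slice is sketched below in Python: two three-level (-1, 0, +1) streams are multiplied sample by sample and the products accumulated over the integration period, as the hardware counters do. The signals are synthetic and the accumulation length is chosen only to mirror the 2^8-cycle granularity.

import numpy as np

rng = np.random.default_rng(2)

def quantize3(x, threshold=0.6):
    """Three-level quantizer standing in for the IF digitization."""
    return np.where(x > threshold, 1, np.where(x < -threshold, -1, 0))

n = 2**16                                   # accumulation length (2^8 * 2^8 cycles)
common = rng.normal(size=n)                 # correlated component shared by A and B
a_i = quantize3(common + 0.5 * rng.normal(size=n))
b_i = quantize3(common + 0.5 * rng.normal(size=n))

auto_aa = np.sum(a_i * a_i)                 # autocorrelation accumulator for A(I)
cross_ab = np.sum(a_i * b_i)                # cross-correlation accumulator A(I) x B(I)
print("normalized cross-correlation:", cross_ab / np.sqrt(auto_aa * np.sum(b_i * b_i)))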
Kula, Katherine; Hale, Lindsay N; Ghoneima, Ahmed; Tholpady, Sunil; Starbuck, John M
2016-11-01
To compare maxillary mucosal thickening and sinus volumes of unilateral cleft lip and palate subjects (UCLP) with noncleft (nonCLP) controls. Randomized, retrospective study of cone-beam computed tomographs (CBCT). University. Fifteen UCLP subjects and 15 sex- and age-matched non-CLP controls, aged 8 to 14 years. Following institutional review board approval and reliability tests, Dolphin three-dimensional imaging software was used to segment and slice maxillary sinuses on randomly selected CBCTs. The surface area (SA) of bony sinus and airspace on all sinus slices was determined using Dolphin and multiplied by slice thickness (0.4 mm) to calculate volume. Mucosal thickening was the difference between bony sinus and airspace volumes. The number of slices with bony sinus and airspace outlines was totaled. Right and left sinus values for each group were pooled (t tests, P > .05; n = 30 each group). All measures were compared (principal components analysis, multivariate analysis of variance, analysis of variance) by group and age (P ≤ .016 was considered significant). Principal components analysis axis 1 and 2 explained 89.6% of sample variance. Principal components analysis showed complete separation based on the sample on axis 1 only. Age groups showed some separation on axis 2. Unilateral cleft lip and palate subjects had significantly smaller bony sinus and airspace volumes, fewer bony and airspace slices, and greater mucosal thickening and percentage mucosal thickening when compared with controls. Older subjects had significantly greater bony sinus and airspace volumes than younger subjects. Children with UCLP have significantly more maxillary sinus mucosal thickening and smaller sinuses than controls.
THREE-PEE SAMPLING THEORY and program 'THRP' for computer generation of selection criteria
L. R. Grosenbaugh
1965-01-01
Theory necessary for sampling with probability proportional to prediction ('three-pee,' or '3P,' sampling) is first developed and then exemplified by numerical comparisons of several estimators. Program 'THRP' for computer generation of appropriate 3P-sample-selection criteria is described, and convenient random integer dispensers are...
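A minimal sketch of 3P selection is given below: each unit is visited, a prediction of its value is made, and the unit is included in the sample only if a random integer drawn from 1..K does not exceed the prediction, so that the inclusion probability is proportional to the prediction. The value of K and the predictions are illustrative, not taken from the report.

import random

random.seed(7)
K = 60                                   # selection constant (assumed value)
predictions = [5, 12, 30, 8, 45, 20, 3, 55, 16, 25]   # ocular volume predictions (illustrative)

# Include unit i only if a random integer in 1..K does not exceed its prediction,
# giving inclusion probability X_i / K, i.e. proportional to the prediction.
sample = [i for i, x in enumerate(predictions) if random.randint(1, K) <= x]
print("units selected for measurement:", sample)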
Light propagation in tissues with controlled optical properties
NASA Astrophysics Data System (ADS)
Tuchin, Valery V.; Maksimova, Irina L.; Zimnyakov, Dmitry A.; Kon, Irina L.; Mavlyutov, Albert H.; Mishin, Alexey A.
1997-10-01
Theoretical and computer modeling approaches, such as Mie theory, radiative transfer theory, diffusion wave correlation spectroscopy, and Monte Carlo simulation, were used to analyze tissue optics during a process of optical clearing due to refractive index matching. Continuous-wave transmittance and forward scattering measurements, as well as intensity correlation experiments, were used to monitor tissue structural and optical properties. As a control, tissue samples of the human sclera were taken. Osmotically active solutions, such as Trazograph, glucose, and polyethylene glycol, were used as chemicals. A characteristic time response of human scleral optical clearing in the range of 3 to 10 min was determined. The diffusion coefficients describing the permeability of the scleral samples to Trazograph were experimentally estimated; the average value was DT ≈ (0.9 ± 0.5) × 10^-5 cm^2/s. The results are general and can be used to describe many other fibrous tissues.
NASA Technical Reports Server (NTRS)
Wallace, J. W.; Lovelady, R. W.; Ferguson, R. L.
1981-01-01
A prototype water quality monitoring system is described which offers almost continuous in situ monitoring. The two-man portable system features: (1) a microprocessor controlled central processing unit which allows preprogrammed sampling schedules and reprogramming in situ; (2) a subsurface unit for multiple depth capability and security from vandalism; (3) an acoustic data link for communications between the subsurface unit and the surface control unit; (4) eight water quality parameter sensors; (5) a nonvolatile magnetic bubble memory which prevents data loss in the event of power interruption; (6) a rechargeable power supply sufficient for 2 weeks of unattended operation; (7) a water sampler which can collect samples for laboratory analysis; (8) data output in direct engineering units on printed tape or through a computer compatible link; (9) internal electronic calibration eliminating external sensor adjustment; and (10) acoustic location and recovery systems. Data obtained in Saginaw Bay, Lake Huron are tabulated.
A steering law for a roof-type configuration for a single-gimbal control moment gyro system
NASA Technical Reports Server (NTRS)
Yoshikawa, T.
1974-01-01
Single-Gimbal Control Moment Gyro (SGCMG) systems have been investigated for attitude control of the Large Space Telescope (LST) and the High Energy Astronomy Observatory (HEAO). However, various proposed steering laws for the SGCMG systems thus far have some defects because of singular states of the system. In this report, a steering law for a roof-type SGCMG system is proposed which is based on a new momentum distribution scheme that makes all the singular states unstable. This momentum distribution scheme is formulated by a treatment of the system as a sampled-data system. From analytical considerations, it is shown that this steering law gives control performance which is satisfactory for practical applications. Results of the preliminary computer simulation entirely support this premise.
Coherent control of photoelectron wavepacket angular interferograms
NASA Astrophysics Data System (ADS)
Hockett, P.; Wollenhaupt, M.; Baumert, T.
2015-11-01
Coherent control over photoelectron wavepackets, via the use of polarization-shaped laser pulses, can be understood as a time and polarization-multiplexed process, where the final (time-integrated) observable coherently samples all instantaneous states of the light-matter interaction. In this work, we investigate this multiplexing via computation of the observable photoelectron angular interferograms resulting from multi-photon atomic ionization with polarization-shaped laser pulses. We consider the polarization sensitivity of both the instantaneous and cumulative continuum wavefunction; the nature of the coherent control over the resultant photoelectron interferogram is thus explored in detail. Based on this understanding, the use of coherent control with polarization-shaped pulses as a methodology for a highly multiplexed coherent quantum metrology is also investigated, and defined in terms of the information content of the observable.
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
Hinsmann, P; Arce, L; Ríos, A; Valcárcel, M
2000-01-07
The separation of seven pesticides by micellar electrokinetic capillary chromatography in spiked water samples is described, allowing the analysis of pesticide mixtures down to a concentration of 50 microg l(-1) in less than 13 min. Calibration, pre-concentration, elution and injection into the sample vial were carried out automatically by a continuous flow system (CFS) coupled to a capillary electrophoresis system via a programmable arm. The whole system was electronically coupled by a micro-processor and completely controlled by a computer. A C18 solid-phase mini-column was used for the pre-concentration, allowing a 12-fold enrichment (as an average value) of the pesticides from fortified water samples. Under the optimal extraction conditions, recoveries between 90 and 114% for most of the pesticides were obtained.
Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture
Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz
2014-01-01
We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178
Experimental scattershot boson sampling
Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio
2015-01-01
Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy. PMID:26601164
Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov
2015-08-01
Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework, that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits, we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that pose different computational challenges, including prioritized data structures in a genetic algorithm, distributed computational effort in multiple hill-climbing searches, and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
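Of the optimization strategies mentioned, simulated annealing is the simplest to sketch. The Python example below shows a generic annealing loop with a stand-in cost function; it is not the sensor-calibration objective or the scheduling logic of the described control unit.

import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=3):
    """Generic simulated-annealing loop of the kind used for automated protocol
    optimization; the cost function passed in below is only a stand-in."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)
        f_new = cost(x_new)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
        t *= cooling
    return x, fx

best_x, best_f = simulated_annealing(lambda x: (x - 2.0) ** 2 + 0.1 * math.sin(20 * x), x0=-5.0)
print(best_x, best_f)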
Method and system for redundancy management of distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2012-01-01
A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
NASA Astrophysics Data System (ADS)
Raghavan, V.; Whitney, Scott E.; Ebmeier, Ryan J.; Padhye, Nisha V.; Nelson, Michael; Viljoen, Hendrik J.; Gogos, George
2006-09-01
In this article, experimental and numerical analyses to investigate the thermal control of an innovative vortex-tube-based polymerase chain reaction (VT-PCR) thermocycler are described. VT-PCR is capable of rapid DNA amplification and real-time optical detection. The device rapidly cycles six 20 μl, 96 bp λ-DNA samples between the PCR stages (denaturation, annealing, and elongation) for 30 cycles in approximately 6 min. Two-dimensional numerical simulations have been carried out using the computational fluid dynamics (CFD) software FLUENT v.6.2.16. Experiments and CFD simulations have been carried out to measure/predict the temperature variation between the samples and within each sample. The heat transfer rate (primarily dictated by the temperature differences between the samples and the external air heating or cooling them) governs the temperature distribution between and within the samples. Temperature variation between and within the samples during the denaturation stage has been quite uniform (maximum variation around ±0.5 and 1.6 °C, respectively). By adjusting the cold release valves in the VT-PCR during part of the cooling stage, the heat transfer rate has been controlled. Improved thermal control, which increases the efficiency of the PCR process, has been obtained both experimentally and numerically by slightly decreasing the rate of cooling. Thus, an almost uniform temperature distribution between and within the samples (within 1 °C) has been attained for the annealing stage as well. It is shown that the VT-PCR is a fully functional PCR machine capable of amplifying specific DNA target sequences in less time than conventional PCR devices.
Benner, W.H.
1984-05-08
An oxygen analyzer which identifies and classifies microgram quantities of oxygen in ambient particulate matter and for quantitating organic oxygen in solvent extracts of ambient particulate matter. A sample is pyrolyzed in oxygen-free nitrogen gas (N2), and the resulting oxygen quantitatively converted to carbon monoxide (CO) by contact with hot granular carbon (C). Two analysis modes are made possible: (1) rapid determination of total pyrolyzable oxygen obtained by decomposing the sample at 1135 °C, or (2) temperature-programmed oxygen thermal analysis obtained by heating the sample from room temperature to 1135 °C as a function of time. The analyzer basically comprises a pyrolysis tube containing a bed of granular carbon under N2, ovens used to heat the carbon and/or decompose the sample, and a non-dispersive infrared CO detector coupled to a mini-computer to quantitate oxygen in the decomposition products and control oven heating.
Benner, William H.
1986-01-01
An oxygen analyzer which identifies and classifies microgram quantities of oxygen in ambient particulate matter and for quantitating organic oxygen in solvent extracts of ambient particulate matter. A sample is pyrolyzed in oxygen-free nitrogen gas (N2), and the resulting oxygen quantitatively converted to carbon monoxide (CO) by contact with hot granular carbon (C). Two analysis modes are made possible: (1) rapid determination of total pyrolyzable oxygen obtained by decomposing the sample at 1135 °C, or (2) temperature-programmed oxygen thermal analysis obtained by heating the sample from room temperature to 1135 °C as a function of time. The analyzer basically comprises a pyrolysis tube containing a bed of granular carbon under N2, ovens used to heat the carbon and/or decompose the sample, and a non-dispersive infrared CO detector coupled to a mini-computer to quantitate oxygen in the decomposition products and control oven heating.
Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1989-01-01
Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
Gaffney, Hannah; Mansell, Warren; Edwards, Rachel; Wright, Jason
2014-11-01
Computerized self-help that has an interactive, conversational format holds several advantages, such as flexibility across presenting problems and ease of use. We designed a new program called MYLO that utilizes the principles of Method of Levels (MOL) therapy, which is based upon Perceptual Control Theory (PCT). We tested the efficacy of MYLO, tested whether the psychological change mechanisms described by PCT mediated its efficacy, and evaluated effects of client expectancy. Forty-eight student participants were randomly assigned to MYLO or a comparison program, ELIZA. Participants discussed a problem they were currently experiencing with their assigned program and completed measures of distress, resolution and expectancy preintervention, postintervention and at 2-week follow-up. MYLO and ELIZA were associated with reductions in distress, depression, anxiety and stress. MYLO was considered more helpful and led to greater problem resolution. The psychological change processes predicted higher ratings of MYLO's helpfulness and reductions in distress. Positive expectancies towards computer-based problem solving correlated with MYLO's perceived helpfulness and greater problem resolution, and this was partly mediated by the psychological change processes identified. The findings provide provisional support for the acceptability of the MYLO program in a non-clinical sample, although its efficacy as an innovative computer-based aid to problem solving remains unclear. Nevertheless, the findings provide tentative early support for the mechanisms of psychological change identified within PCT and highlight the importance of client expectations in predicting engagement in computer-based self-help.
Liu, Yushu; Ye, Hongqiang; Wang, Yong; Zhao, Yijao; Sun, Yuchun; Zhou, Yongsheng
2018-05-17
To evaluate the internal adaptations of cast crowns made from resin patterns produced using three different computer-aided design/computer-assisted manufacturing technologies. A full-crown abutment made of zirconia was digitized using an intraoral scanner, and the design of the crown was finished on the digital model. Resin patterns were fabricated using a fused deposition modeling (FDM) 3D printer (LT group), a digital light projection (DLP) 3D printer (EV group), or a five-axis milling machine (ZT group). All patterns were cast in cobalt-chromium alloy crowns. Crowns made from traditional handmade wax patterns (HM group) were used as controls. Each group contained 10 samples. The internal gaps of the patterns were analyzed using a 3D replica method and optical digitization. The results were compared using Kruskal-Wallis analysis of variance (ANOVA), a one-sample t test, and signed rank test (α = .05). For the LT group, the marginal and axial gaps were significantly larger than in the other three groups (P < .05), but the occlusal adaptation did not reveal a significant difference (P > .05). In the ZT group, the axial gap was slightly smaller than in the HM group (P < .0083). All the means of gaps in all areas in the four groups were less than 150 μm. Casting crowns using casting patterns made from all three CAD/CAM systems could not produce the prescribed parameters, but the crowns showed clinically acceptable internal adaptations.
LabVIEW-based control software for para-hydrogen induced polarization instrumentation.
Agraz, Jose; Grunfeld, Alexander; Li, Debiao; Cunningham, Karl; Willey, Cindy; Pozos, Robert; Wagner, Shawn
2014-04-01
The elucidation of cell metabolic mechanisms is the modern underpinning of the diagnosis, treatment, and in some cases the prevention of disease. Para-Hydrogen induced polarization (PHIP) enhances magnetic resonance imaging (MRI) signals over 10,000 fold, allowing for the MRI of cell metabolic mechanisms. This signal enhancement is the result of hyperpolarizing endogenous substances used as contrast agents during imaging. PHIP instrumentation hyperpolarizes Carbon-13 ((13)C) based substances using a process requiring control of a number of factors: chemical reaction timing, gas flow, monitoring of a static magnetic field (Bo), radio frequency (RF) irradiation timing, reaction temperature, and gas pressures. Current PHIP instruments manually control the hyperpolarization process resulting in the lack of the precise control of factors listed above, resulting in non-reproducible results. We discuss the design and implementation of a LabVIEW based computer program that automatically and precisely controls the delivery and manipulation of gases and samples, monitoring gas pressures, environmental temperature, and RF sample irradiation. We show that the automated control over the hyperpolarization process results in the hyperpolarization of hydroxyethylpropionate. The implementation of this software provides the fast prototyping of PHIP instrumentation for the evaluation of a myriad of (13)C based endogenous contrast agents used in molecular imaging.
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
An efficient sampling strategy for selection of biobank samples using risk scores.
Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna
2017-07-01
The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m² (n=545) at four years of age, and controls as children with a BMI of ⩽17 kg/m² (n=4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p=0.001), birth weight (p<0.001), BMI of both parents (p<0.001 for both), type of residence (p=0.04) and economic situation (p=0.12), yielded an area under the receiver operating characteristic curve of 67% (n=3945 with complete data). The case group (n=416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
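A minimal Python sketch of the proposed selection strategy follows: a logistic risk-score model is fitted on covariates available before biobank retrieval, scores are binned into the three risk strata, and twice as many controls as cases are drawn within each stratum. The data are synthetic and the covariates are placeholders for the study variables.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic covariates standing in for parental BMI, birth weight, etc.
rng = np.random.default_rng(5)
n = 5000
X = rng.normal(size=(n, 4))
logit = -2.0 + X @ np.array([0.8, 0.5, 0.3, 0.2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = overweight case

# Risk score from logistic regression, binned into low (<6%), medium, high (>=14%)
risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
strata = np.digitize(risk, [0.06, 0.14])

# Within each stratum, draw two controls per case (without matching on sex here)
selected_controls = []
for s in np.unique(strata):
    cases_in_s = np.where((strata == s) & (y == 1))[0]
    controls_in_s = np.where((strata == s) & (y == 0))[0]
    k = min(2 * len(cases_in_s), len(controls_in_s))
    selected_controls.extend(rng.choice(controls_in_s, size=k, replace=False))
print("total controls selected:", len(selected_controls))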
Computational fluid dynamics (CFD) studies of a miniaturized dissolution system.
Frenning, G; Ahnfelt, E; Sjögren, E; Lennernäs, H
2017-04-15
Dissolution testing is an important tool that has applications ranging from fundamental studies of drug-release mechanisms to quality control of the final product. The rate of release of the drug from the delivery system is known to be affected by hydrodynamics. In this study we used computational fluid dynamics to simulate and investigate the hydrodynamics in a novel miniaturized dissolution method for parenteral formulations. The dissolution method is based on a rotating disc system and uses a rotating sample reservoir which is separated from the remaining dissolution medium by a nylon screen. Sample reservoirs of two sizes were investigated (SR6 and SR8) and the hydrodynamic studies were performed at rotation rates of 100, 200 and 400 rpm. The overall fluid flow was similar for all investigated cases, with a lateral upward spiraling motion and central downward motion in the form of a vortex to and through the screen. The simulations indicated that the exchange of dissolution medium between the sample reservoir and the remaining release medium was rapid for typical screens, for which almost complete mixing would be expected to occur within less than one minute at 400 rpm. The local hydrodynamic conditions in the sample reservoirs depended on their size; SR8 appeared to be relatively more affected than SR6 by the resistance to liquid flow resulting from the screen.
Richter, Randy R; Sebelski, Chris A; Austin, Tricia M
2016-09-01
The quality of abstract reporting in physical therapy literature is unknown. The purpose of this study was to provide baseline data for judging the future impact of the 2010 Consolidated Standards of Reporting Trials statement specifically referencing the 2008 Consolidated Standards of Reporting Trials statement for reporting of abstracts of randomized controlled trials across and between a broad sample and a core sample of physical therapy literature. A cross-sectional, bibliographic analysis was conducted. Abstracts of randomized controlled trials from 2009 were retrieved from PubMed, PEDro, and CENTRAL. Eligibility was determined using PEDro criteria. For outcomes measures, items from the Consolidated Standards of Reporting Trials statement for abstract reporting were used for assessment. Raters were not blinded to citation details. Using a computer-generated set of random numbers, 150 abstracts from 112 journals comprised the broad sample. A total of 53 abstracts comprised the core sample. Fourteen of 20 Consolidated Standards of Reporting Trials items for both samples were reported in less than 50% of the abstracts. Significantly more abstracts in the core sample reported (% difference core - broad; 95% confidence interval) title (28.4%; 12.9%-41.2%), blinding (15.2%; 1.6%-29.8%), setting (47.6%; 32.4%-59.4%), and confidence intervals (13.1%; 5.0%-25.1%). These findings provide baseline data for determining if continuing efforts to improve abstract reporting are heeded.
Potato Operation: automatic detection of potato diseases
NASA Astrophysics Data System (ADS)
Lefebvre, Marc; Zimmerman, Thierry; Baur, Charles; Guegerli, Paul; Pun, Thierry
1995-01-01
The Potato Operation is a collaborative, multidisciplinary project in the domain of destructive testing of agricultural products. It aims at automating pulp sampling of potatoes in order to detect possible viral diseases. Such viruses can decrease field productivity by a factor of up to ten. A machine composed of three conveyor belts, a vision system and a robotic arm, all controlled by a PC, has been built. Potatoes are brought one by one from a bulk to the vision system, where they are seized by a rotating holding device. The sprouts, where the viral activity is maximum, are then detected by an active vision process operating on multiple views. The 3D coordinates of the sampling point are communicated to the robot arm holding a drill. Some flesh is sampled by the drill, then deposited into an Elisa plate. After sampling, the robot arm washes the drill in order to prevent any contamination. The PC simultaneously controls these processes: the conveying of the potatoes, the vision algorithms and the sampling procedure. The master process, that is the vision procedure, uses three methods to achieve sprout detection. A profile analysis first locates the sprouts as protuberances. Two frontal analyses, based respectively on fluorescence and local variance, confirm the previous detection and provide the 3D coordinates of the sampling zone. The other two processes work by interruption of the master process.
Marfeo, Elizabeth E; Ni, Pengsheng; Haley, Stephen M; Bogusz, Kara; Meterko, Mark; McDonough, Christine M; Chan, Leighton; Rasch, Elizabeth K; Brandt, Diane E; Jette, Alan M
2013-09-01
To use item response theory (IRT) data simulations to construct and perform initial psychometric testing of a newly developed instrument, the Social Security Administration Behavioral Health Function (SSA-BH) instrument, that aims to assess behavioral health functioning relevant to the context of work. Cross-sectional survey followed by IRT calibration data simulations. Community. Sample of individuals applying for Social Security Administration disability benefits: claimants (n=1015) and a normative comparative sample of U.S. adults (n=1000). None. SSA-BH measurement instrument. IRT analyses supported the unidimensionality of 4 SSA-BH scales: mood and emotions (35 items), self-efficacy (23 items), social interactions (6 items), and behavioral control (15 items). All SSA-BH scales demonstrated strong psychometric properties including reliability, accuracy, and breadth of coverage. High correlations of the simulated 5- or 10-item computer adaptive tests with the full item bank indicated robust ability of the computer adaptive testing approach to comprehensively characterize behavioral health function along 4 distinct dimensions. Initial testing and evaluation of the SSA-BH instrument demonstrated good accuracy, reliability, and content coverage along all 4 scales. Behavioral function profiles of Social Security Administration claimants were generated and compared with age- and sex-matched norms along 4 scales: mood and emotions, behavioral control, social interactions, and self-efficacy. Using the computer adaptive test-based approach offers the ability to collect standardized, comprehensive functional information about claimants in an efficient way, which may prove useful in the context of the Social Security Administration's work disability programs. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Kerai, Paresh; Wood, Pene; Martin, Mary
2014-03-01
Australia introduced its version of personal health records in July 2012. Success of the personally controlled electronic health record (PCEHR) relies on acceptance during the early stages. The main aim of this study was to investigate the views of a sample of elderly people in a non-metropolitan region in Australia on the PCEHR, and to assess their acceptance levels of this concept. A self-administered questionnaire was distributed to a non-probability convenience sample of respondents recruited from meetings of Probus, a community club for active business and professional retirees. Approximately three-quarters of the respondents had computer and Internet access at home. For those without home access, a computer at a general practitioner's practice was seen as beneficial for accessing the PCEHR. Respondents felt that access to their health record would help them make decisions about their own health and improve their communication with healthcare providers. The majority of respondents were in favour of the PCEHR, although some expressed concerns about the security of their PCEHR. There was mixed opinion surrounding access by health professionals to an individual's PCEHR. This study has revealed important information about views of the PCEHR. While the respondents were generally in favour of the concept, there were still some concerns about the security of the PCEHR, suggesting further reassurance may be required. The study also highlighted some measures, in particular the provision of general practitioner computer access points and print-out facilities, that may need to be considered during these initial implementation stages in order to improve adoption rates once the technology is fully available. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Sensitive Spin Detection Using An On-Chip Squid-Waveguide Resonator
NASA Astrophysics Data System (ADS)
Yue, Guang
Quantum computing offers a novel way of computing based on quantum mechanics, with exciting applications. Quantum systems with dilute spins, such as rare-earth ions hosted in single crystals and molecule-based magnets, are promising qubit candidates to form the basis of a quantum computer. High-sensitivity measurement and coherent control of these spin systems are crucial for their practical use as qubits. The micro-SQUID (direct-current micrometer-sized Superconducting QUantum Interference Device) is capable of measuring the magnetization of spin systems with high sensitivity. For example, the micro-SQUID technique can measure magnetic moments as small as several thousand μB, as shown in the study of [W. Wernsdorfer, Supercond. Sci. Technol. 22, 064013 (2009)]. Here we develop a novel on-chip setup that combines the micro-SQUID sensitivity with microwave excitation. Such a setup can be used for electron spin resonance measurements or coherent control of spins, utilizing the high sensitivity of the micro-SQUID for signal detection. To build the setup, we studied the fabrication process of the micro-SQUID, which is made of weak-linked Josephson junctions. The SQUID as a detector is integrated on the same chip with a shorted coplanar waveguide, so that microwave pulses can be applied through the waveguide to excite the sample for resonance measurements. The whole device is plasma etched from a thin (~20 nm) niobium film, so that the SQUID can operate in large in-plane magnetic fields of several tesla. In addition, computer simulations are done to find the best design of the waveguide such that the microwave excitation field is sufficiently strong and uniformly applied to the sample. The magnetization curve of a Mn12 molecule-based magnet sample is measured to verify proper operation of the micro-SQUID. An electron spin resonance measurement is performed on the setup for gadolinium ions diluted in a CaWO4 single crystal. The measurement shows clear evidence of the resonance signal from the first transition of the gadolinium ions' energy levels, confirming that the setup was successfully built. Due to the high sensitivity of the micro-SQUID and the ability to concentrate microwave energy in small areas of the chip, this setup can detect signals from a small number of spins (10^7) in a small volume (several μm^3).
Finite-difference solution of the compressible stability eigenvalue problem
NASA Technical Reports Server (NTRS)
Malik, M. R.
1982-01-01
A compressible stability analysis computer code is developed. The code uses a matrix finite difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial value approach. The local eigenvalue search procedure also yields eigenfunctions and, with little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear, so the method is suitable for use in a black-box stability code. Sample stability calculations are presented for the boundary layer profiles of a Laminar Flow Control (LFC) swept wing.
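The paper's local eigenvalue search is specific to the compressible stability equations, but the general idea, refining a single eigenvalue from a good initial guess instead of computing the whole spectrum, can be sketched with a generic finite-difference operator. The operator, grid, and shift below are illustrative only and are not the author's code; SciPy's shift-invert Arnoldi solver plays the role of the local search.

```python
# Illustrative only: local eigenvalue refinement from an initial guess using
# shift-invert on a 1-D finite-difference operator (not the paper's stability equations).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

N = 400
h = 1.0 / (N + 1)
# Second-difference operator for -u'' on (0, 1) with Dirichlet boundaries;
# exact eigenvalues are (k*pi)^2, k = 1, 2, ...
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N)) / h**2

guess = 85.0                                  # a "good guess" near the 3rd mode, (3*pi)^2 ~= 88.8
val, vec = eigs(A.tocsc(), k=1, sigma=guess, which="LM")
print("refined eigenvalue:", val[0].real)     # converges to the eigenvalue nearest the guess
print("exact             :", (3 * np.pi) ** 2)
```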
Airport-Noise Levels and Annoyance Model (ALAMO) user's guide
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
A guide for the use of the Airport-Noise Level and Annoyance MOdel (ALAMO) at the Langley Research Center computer complex is provided. This document is divided into 5 primary sections, the introduction, the purpose of the model, and an in-depth description of the following subsystems: baseline, noise reduction simulation and track analysis. For each subsystem, the user is provided with a description of architecture, an explanation of subsystem use, sample results, and a case runner's check list. It is assumed that the user is familiar with the operations at the Langley Research Center (LaRC) computer complex, the Network Operating System (NOS 1.4) and CYBER Control Language. Incorporated within the ALAMO model is a census database system called SITE II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
JACKSON VL
2011-08-31
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
NASA Technical Reports Server (NTRS)
Natesh, R.; Smith, J. M.; Bruce, T.; Oidwai, H. A.
1980-01-01
One hundred and seventy-four silicon sheet samples were analyzed for twin boundary density, dislocation pit density, and grain boundary length. Procedures were developed for the quantitative analysis of the twin boundary and dislocation pit densities using a QTM-720 Quantitative Image Analyzing system. The QTM-720 system was upgraded with the addition of a PDP 11/03 mini-computer with dual floppy disc drive, a DECwriter high-speed printer, and a field-image feature interface module. Three versions of a computer program that controls the data acquisition and analysis on the QTM-720 were written. Procedures for chemical polishing and etching were also developed.
Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R
2014-01-01
Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly-measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
• the number of different ratios,
• the number of points belonging to each ratio group, and
• the ratios (i.e. slopes) of each group.
Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software. In this study Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended as a more objective and straightforward way of calculating isotope ratios in geochemistry than current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3). [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007 (doi: 10.1016/j.csda.2006.08.014).
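The R/flexmix code referenced above is not reproduced here; purely as an illustration of the same idea, the sketch below (Python, simulated data, illustrative parameter choices) fits a finite mixture of zero-intercept regression lines by EM, taking each component's slope as an estimate of one isotope ratio and dropping components whose relative size falls below a control parameter.

```python
# Sketch of EM for a finite mixture of regression lines through the origin, as used
# above for isotope-ratio estimation (simulated data, illustrative settings only).
import numpy as np

rng = np.random.default_rng(1)

# Simulate transient signals from two mixed isotope ratios (slopes 0.0072 and 0.30).
x = rng.uniform(0.1, 1.0, 600)
true_slope = np.where(rng.random(600) < 0.5, 0.0072, 0.30)
y = true_slope * x + rng.normal(0, 0.005, 600)

def em_mixture_of_slopes(x, y, k_max=4, min_rel_size=0.05, n_iter=200):
    k = k_max
    slopes = rng.uniform(y.min() / x.max(), y.max() / x.min(), k)
    sigma = np.full(k, y.std())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point (Gaussian residuals).
        resid = y[:, None] - x[:, None] * slopes[None, :]
        dens = weights * np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # Drop components whose relative size falls below the control parameter.
        keep = resp.mean(axis=0) >= min_rel_size
        if not keep.all():
            slopes, sigma = slopes[keep], sigma[keep]
            resp = resp[:, keep]
            resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope through the origin for each component.
        slopes = (resp * x[:, None] * y[:, None]).sum(0) / (resp * x[:, None] ** 2).sum(0)
        resid = y[:, None] - x[:, None] * slopes[None, :]
        sigma = np.sqrt((resp * resid ** 2).sum(0) / resp.sum(0))
        weights = resp.mean(axis=0)
    return slopes, weights

slopes, weights = em_mixture_of_slopes(x, y)
print("estimated ratios:", np.sort(slopes))   # components converge near 0.0072 and 0.30
```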
Time-Domain Terahertz Computed Axial Tomography NDE System
NASA Technical Reports Server (NTRS)
Zimdars, David
2012-01-01
NASA has identified the need for advanced non-destructive evaluation (NDE) methods to characterize aging and durability in aircraft materials to improve the safety of the nation's airline fleet. 3D THz tomography can play a major role in detection and characterization of flaws and degradation in aircraft materials, including Kevlar-based composites and Kevlar and Zylon fabric covers for soft-shell fan containment where aging and durability issues are critical. A prototype computed tomography (CT) time-domain (TD) THz imaging system has been used to generate 3D images of several test objects including a TUFI tile (a thermal protection system tile used on the Space Shuttle and possibly the Orion or similar capsules). This TUFI tile had simulated impact damage that was located and the depth of damage determined. The CT motion control gantry was designed and constructed, and then integrated with a T-Ray 4000 control unit and motion controller to create a complete CT TD-THz imaging system prototype. A data collection software script was developed that takes multiple z-axis slices in sequence and saves the data for batch processing. The data collection software was integrated with the ability to batch process the slice data with the CT TD-THz image reconstruction software. The time required to take a single CT slice was decreased from six minutes to approximately one minute by replacing the 320 ps, 100-Hz waveform acquisition system with an 80 ps, 1,000-Hz waveform acquisition system. The TD-THz computed tomography system was built from pre-existing commercial off-the-shelf (COTS) subsystems. A CT motion control gantry was constructed from COTS components that can handle larger samples. The motion control gantry allows inspection of sample sizes of up to approximately one cubic foot (0.03 cubic meters). The system reduced to practice a CT TD-THz system incorporating a COTS 80-ps/1-kHz waveform scanner. The incorporation of this scanner in the system allows acquisition of 3D slice data with better signal-to-noise using a COTS scanner rather than the "chirped" scanner. The system also reduced to practice a prototype for commercial CT systems for insulating materials where safety concerns cannot accommodate x-ray. A software script was written to automate the COTS software to collect and process TD-THz CT data.
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
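A minimal numerical sketch of the selection rule described above, under simplifying assumptions: synthetic spectra, a pseudo-inverse-based net analyte signal construction, and distance measured between scalar NAS values. It illustrates the greedy "add the most distant candidate" loop only and is not the authors' implementation.

```python
# Sketch: greedy selection of calibration samples by net-analyte-signal (NAS) norm distance.
# Synthetic spectra; the NAS construction follows a standard pseudo-inverse formulation.
import numpy as np

rng = np.random.default_rng(2)
n_cal, n_cand, n_wl = 20, 200, 120

analyte = rng.random(n_wl)                       # pure analyte spectrum (unknown in practice)
interf = rng.random((3, n_wl))                   # interferent spectra

def make(n):
    c = rng.uniform(0.1, 1.0, n)                 # analyte concentrations
    b = rng.uniform(0.0, 1.0, (n, 3))            # interferent levels
    return c, c[:, None] * analyte + b @ interf + rng.normal(0, 0.01, (n, n_wl))

c_cal, X_cal = make(n_cal)                       # initial calibration set (concentrations known)
_, X_cand = make(n_cand)                         # candidates (concentrations not yet measured)

# Projection matrix that removes everything except the analyte direction: rebuild the
# calibration spectra without the analyte contribution, then project that space out.
X_minus = X_cal - np.outer(c_cal, np.linalg.pinv(c_cal[:, None]) @ X_cal)
P = np.eye(n_wl) - np.linalg.pinv(X_minus) @ X_minus

def nas_norm(X):
    return np.linalg.norm(X @ P.T, axis=1)       # scalar NAS value per spectrum

sel_norms = list(nas_norm(X_cal))
cand_norms = nas_norm(X_cand)

order = []
for _ in range(10):                              # pick 10 new samples to send for reference analysis
    # Distance of each candidate to the selected set = gap to its nearest selected NAS value.
    d = np.min(np.abs(cand_norms[:, None] - np.array(sel_norms)[None, :]), axis=1)
    d[order] = -np.inf                           # do not re-pick already selected candidates
    best = int(np.argmax(d))
    order.append(best)
    sel_norms.append(cand_norms[best])

print("candidates to measure next:", order)
```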
An active learning representative subset selection method using net analyte signal
NASA Astrophysics Data System (ADS)
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-01
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
Effect of virtual reality on cognitive dysfunction in patients with brain tumor.
Yang, Seoyon; Chun, Min Ho; Son, Yu Ri
2014-12-01
To investigate whether virtual reality (VR) training will help the recovery of cognitive function in brain tumor patients. Thirty-eight brain tumor patients (19 men and 19 women) with cognitive impairment recruited for this study were assigned to either VR group (n=19, IREX system) or control group (n=19). Both VR training (30 minutes a day for 3 times a week) and computer-based cognitive rehabilitation program (30 minutes a day for 2 times) for 4 weeks were given to the VR group. The control group was given only the computer-based cognitive rehabilitation program (30 minutes a day for 5 days a week) for 4 weeks. Computerized neuropsychological tests (CNTs), Korean version of Mini-Mental Status Examination (K-MMSE), and Korean version of Modified Barthel Index (K-MBI) were used to evaluate cognitive function and functional status. The VR group showed improvements in the K-MMSE, visual and auditory continuous performance tests (CPTs), forward and backward digit span tests (DSTs), forward and backward visual span test (VSTs), visual and verbal learning tests, Trail Making Test type A (TMT-A), and K-MBI. The VR group showed significantly (p<0.05) better improvements than the control group in visual and auditory CPTs, backward DST and VST, and TMT-A after treatment. VR training can have beneficial effects on cognitive improvement when it is combined with computer-assisted cognitive rehabilitation. Further randomized controlled studies with large samples according to brain tumor type and location are needed to investigate how VR training improves cognitive impairment.
Effect of Virtual Reality on Cognitive Dysfunction in Patients With Brain Tumor
Yang, Seoyon; Son, Yu Ri
2014-01-01
Objective To investigate whether virtual reality (VR) training will help the recovery of cognitive function in brain tumor patients. Methods Thirty-eight brain tumor patients (19 men and 19 women) with cognitive impairment recruited for this study were assigned to either VR group (n=19, IREX system) or control group (n=19). Both VR training (30 minutes a day for 3 times a week) and computer-based cognitive rehabilitation program (30 minutes a day for 2 times) for 4 weeks were given to the VR group. The control group was given only the computer-based cognitive rehabilitation program (30 minutes a day for 5 days a week) for 4 weeks. Computerized neuropsychological tests (CNTs), Korean version of Mini-Mental Status Examination (K-MMSE), and Korean version of Modified Barthel Index (K-MBI) were used to evaluate cognitive function and functional status. Results The VR group showed improvements in the K-MMSE, visual and auditory continuous performance tests (CPTs), forward and backward digit span tests (DSTs), forward and backward visual span test (VSTs), visual and verbal learning tests, Trail Making Test type A (TMT-A), and K-MBI. The VR group showed significantly (p<0.05) better improvements than the control group in visual and auditory CPTs, backward DST and VST, and TMT-A after treatment. Conclusion VR training can have beneficial effects on cognitive improvement when it is combined with computer-assisted cognitive rehabilitation. Further randomized controlled studies with large samples according to brain tumor type and location are needed to investigate how VR training improves cognitive impairment. PMID:25566470
Development of an integrated semi-automated system for in vitro pharmacodynamic modelling.
Wang, Liangsu; Wismer, Michael K; Racine, Fred; Conway, Donald; Giacobbe, Robert A; Berejnaia, Olga; Kath, Gary S
2008-11-01
The aim of this study was to develop an integrated system for in vitro pharmacodynamic modelling of antimicrobials with greater flexibility, easier control and better accuracy than existing in vitro models. Custom-made bottle caps, fittings, valve controllers and a modified bench-top shaking incubator were used. A temperature-controlled automated sample collector was built. Computer software was developed to manage experiments and to control the entire system including solenoid pinch valves, peristaltic pumps and the sample collector. The system was validated by pharmacokinetic simulations of linezolid 600 mg infusion. The antibacterial effect of linezolid against multiple Staphylococcus aureus strains was also studied in this system. An integrated semi-automated bench-top system was built and validated. The temperature-controlled automated sample collector allowed unattended collection and temporary storage of samples. The system software reduced the labour necessary for many tasks and also improved the timing accuracy for performing simultaneous actions in multiple parallel experiments. The system was able to simulate human pharmacokinetics of linezolid 600 mg intravenous infusion accurately. A pharmacodynamic study of linezolid against multiple S. aureus strains with a range of MICs showed that the required 24 h free drug AUC/MIC ratio was approximately 30 in order to keep the organism counts at the same level as their initial inoculum and was about ≥68 in order to achieve >2 log10 cfu/mL reduction in the in vitro model. The integrated semi-automated bench-top system provided the ability to overcome many of the drawbacks of existing in vitro models. It can be used for various simple or complicated pharmacokinetic/pharmacodynamic studies efficiently and conveniently.
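As a sketch of the kind of pharmacokinetic profile the system was validated against, the snippet below simulates a one-compartment model of a 600 mg infusion and computes the free-drug AUC/MIC index discussed above. The half-life, volume of distribution, protein binding, infusion duration, and MIC are illustrative assumptions, not the study's parameters.

```python
# Sketch: one-compartment IV-infusion pharmacokinetics and the fAUC(0-24)/MIC index.
# All parameter values below are illustrative assumptions, not the study's settings.
import numpy as np

dose_mg = 600.0
t_inf_h = 1.0           # infusion duration (assumed)
half_life_h = 5.0       # elimination half-life (assumed)
v_d_L = 45.0            # volume of distribution (assumed)
free_fraction = 0.69    # unbound fraction (assumed)
mic_mg_L = 2.0          # example MIC

k = np.log(2) / half_life_h
t = np.linspace(0, 24, 2401)           # hours; repeated dosing ignored for simplicity
rate = dose_mg / t_inf_h

# Concentration during and after a single infusion (standard closed-form solution).
c = np.where(
    t <= t_inf_h,
    rate / (k * v_d_L) * (1 - np.exp(-k * t)),
    rate / (k * v_d_L) * (1 - np.exp(-k * t_inf_h)) * np.exp(-k * (t - t_inf_h)),
)

# Trapezoidal AUC over 0-24 h, scaled to free drug.
f_auc_24 = free_fraction * float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
print(f"fAUC(0-24)/MIC = {f_auc_24 / mic_mg_L:.1f}")
# This index is what would be compared against the ~30 (stasis) and >=68 (2-log kill)
# targets reported above.
```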
Sawmill simulation: concepts and computer use
Hugh W. Reynolds; Charles J. Gatchell
1969-01-01
Product specifications were fed into a computer so that the yield of products from the same sample of logs could be determined for simulated sawing methods. Since different sawing patterns were tested on the same sample, variation among log samples was eliminated; hence, the statistical conclusions are very precise.
Architectures for Quantum Simulation Showing a Quantum Speedup
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens
2018-04-01
One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based, quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.
The multimedia computer for office-based patient education: a systematic review.
Wofford, James L; Smith, Edward D; Miller, David P
2005-11-01
Use of the multimedia computer for education is widespread in schools and businesses, and yet computer-assisted patient education is rare. In order to explore the potential use of computer-assisted patient education in the office setting, we performed a systematic review of randomized controlled trials (search date April 2004 using MEDLINE and Cochrane databases). Of the 26 trials identified, outcome measures included clinical indicators (12/26, 46.1%), knowledge retention (12/26, 46.1%), health attitudes (15/26, 57.7%), level of shared decision-making (5/26, 19.2%), health services utilization (4/26, 17.6%), and costs (5/26, 19.2%), respectively. Four trials targeted patients with breast cancer, but the clinical issues were otherwise diverse. Reporting of the testing of randomization (76.9%) and appropriate analysis of main effect variables (70.6%) were more common than reporting of a reliable randomization process (35.3%), blinding of outcomes assessment (17.6%), or sample size definition (29.4%). We concluded that the potential for improving the efficiency of the office through computer-assisted patient education has been demonstrated, but better proof of the impact on clinical outcomes is warranted before this strategy is accepted in the office setting.
Garrido, Gemma; Barrios, Maite; Penadés, Rafael; Enríquez, Maria; Garolera, Maite; Aragay, Núria; Pajares, Marta; Vallès, Vicenç; Delgado, Luis; Alberni, Joan; Faixa, Carlota; Vendrell, Josep M
2013-11-01
Quality of life (QoL) is an important outcome in the treatment of schizophrenia. Cognitive deficits have an impact on functional outcomes. Cognitive remediation therapy is emerging as a psychological intervention that targets cognitive impairment, but the effect of computer-assisted cognitive remediation on neuropsychology and social functioning and wellbeing remains unclear. The aim of the current study is to investigate the neurocognitive outcomes of computer-assisted cognitive remediation (CACR) therapy in a sample of schizophrenia patients, and to measure the quality of life and self-esteem as secondary outcomes. Sixty-seven people with schizophrenia were randomly assigned to computer-assisted cognitive remediation or an active control condition. The main outcomes were neuropsychological measures and secondary outcomes (self-esteem and quality of life). Measurements were recorded at baseline and post-treatment. The CACR therapy group improved in speed of processing, working memory and reasoning and problem-solving cognitive domains. QoL and self-esteem measures also showed significant improvements over time in this group. Computer-assisted cognitive remediation therapy for people with schizophrenia achieved improvements in neuropsychological performance and in QoL and self-esteem measurements. © 2013 Elsevier B.V. All rights reserved.
X-ray tomography characterization of density gradient aerogel in laser targets
NASA Astrophysics Data System (ADS)
Borisenko, L.; Orekhov, A.; Musgrave, C.; Nazarov, W.; Merkuliev, Yu; Borisenko, N.
2016-04-01
The low-density solid laser target characterization studies begun with the SkyScan 1074 computer microtomograph (CMT) [1, 2] are now continued at higher resolution with the SkyScan 1174. The research is particularly focused on the possibility to obtain, control and precisely measure gradient-density polymers for laser target production. The repeatability of the samples and the possibility of obtaining stable gradients are analysed. The measurements were performed on mm-scale divinyl benzene (DVB) rods.
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel model-following torque control system. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
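The paper's controllers target a flexible, geared joint drive, which is beyond a short sketch; the snippet below only illustrates the standard rigid-joint computed torque law that the proposed schemes build on, for a single pendulum joint with assumed parameters and gains.

```python
# Sketch of the standard computed torque (inverse dynamics) law for one rigid pendulum joint.
# Plant parameters and gains are assumed for illustration; the paper's flexible-joint
# extensions and joint torque loops are not modelled here.
import numpy as np

m, l, b, g = 2.0, 0.5, 0.1, 9.81          # mass, length, viscous friction, gravity (assumed)
I = m * l**2                              # point-mass inertia about the joint

def plant_accel(q, qd, tau):
    """True joint acceleration from I*qdd + b*qd + m*g*l*sin(q) = tau."""
    return (tau - b * qd - m * g * l * np.sin(q)) / I

def computed_torque(q, qd, q_des, qd_des, qdd_des, kp=100.0, kd=20.0):
    """tau = I*(qdd_des + kd*e_dot + kp*e) + b*qd + m*g*l*sin(q)."""
    e, ed = q_des - q, qd_des - qd
    return I * (qdd_des + kd * ed + kp * e) + b * qd + m * g * l * np.sin(q)

# Track a 0.5 Hz sinusoidal joint trajectory with simple Euler integration.
dt, T = 1e-3, 4.0
q, qd = 0.3, 0.0                          # start away from the trajectory
for i in range(int(T / dt)):
    t = i * dt
    q_des = 0.5 * np.sin(np.pi * t)
    qd_des = 0.5 * np.pi * np.cos(np.pi * t)
    qdd_des = -0.5 * np.pi**2 * np.sin(np.pi * t)
    tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
    qdd = plant_accel(q, qd, tau)
    q, qd = q + qd * dt, qd + qdd * dt

print(f"final tracking error: {abs(q_des - q):.2e} rad")
```

With an exact model, the law cancels the nonlinear dynamics and leaves linear error dynamics set by kp and kd, which is why tracking converges; the flexible-joint dynamics studied in the paper break this exact cancellation and motivate the added torque loops.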
Kazmerski, Lawrence L.
1990-01-01
A method and apparatus for differential spectroscopic atomic imaging are disclosed for spatially resolving and displaying not only individual atoms on a sample surface, but also bonds and the specific atomic species within those bonds. The apparatus includes a scanning tunneling microscope (STM) that is modified to include photon biasing, preferably a tuneable laser; modulated electronic surface biasing for the sample; and temperature biasing, preferably a vibration-free refrigerated sample mounting stage. Computer control and data processing and visual display components are also included. The method includes modulating the electronic bias voltage with and without selected photon wavelengths and frequency biasing, under a stabilizing (usually cold) bias temperature, to detect bonding and the specific atomic species in the bonds as the STM rasters the sample. These data are processed along with the atomic spatial topography data obtained from the STM raster scan to create a real-time visual image of the atoms on the sample surface.
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Abstract. Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
Active optical control system design of the SONG-China Telescope
NASA Astrophysics Data System (ADS)
Ye, Yu; Kou, Songfeng; Niu, Dongsheng; Li, Cheng; Wang, Guomin
2012-09-01
The standard SONG node structure of the control system is presented. The active optical control system of the project is a distributed system comprising a host computer and a slave intelligent controller. The host control computer collects the information from the wavefront sensor and sends commands to the slave computer to realize a closed-loop model. For the intelligent controller, a programmable logic controller (PLC) system is used. This system combines an industrial personal computer (IPC) with a PLC to make up a powerful and reliable control system.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating the non-adaptive data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordination sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
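As an illustration of the Random Projections component described above (not the CKRL implementation itself), the sketch below projects a high-dimensional feature matrix onto a random low-dimensional subspace and checks that pairwise distances are roughly preserved, which is what makes the subsequent kernelized policy iteration workable at lower cost.

```python
# Sketch: Gaussian random projection of high-dimensional state features to a low
# dimension, with a check that pairwise distances are approximately preserved
# (illustration of the dimensionality-reduction step only, not the full CKRL algorithm).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n_samples, d_high, d_low = 500, 2000, 64

Phi = rng.normal(size=(n_samples, d_high))              # high-dimensional features
R = rng.normal(size=(d_high, d_low)) / np.sqrt(d_low)   # random projection matrix
Phi_low = Phi @ R                                       # projected features

ratio = pdist(Phi_low) / pdist(Phi)
print(f"pairwise-distance ratio: mean {ratio.mean():.3f}, std {ratio.std():.3f}")
# Mean near 1 with small spread: distances survive the projection, so value-function
# approximation can be carried out in the 64-dimensional subspace.
```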
Quantitative analysis of biomedical samples using synchrotron radiation microbeams
NASA Astrophysics Data System (ADS)
Ektessabi, Ali; Shikine, Shunsuke; Yoshida, Sohei
2001-07-01
X-ray fluorescence (XRF) using a synchrotron radiation (SR) microbeam was applied to investigate distributions and concentrations of elements in single neurons of patients with neurodegenerative diseases. In this paper we introduce a computer code that has been developed to quantify the trace elements and matrix elements at the single cell level. This computer code has been used in studies of several important neurodegenerative diseases such as Alzheimer's disease (AD), Parkinson's disease (PD) and parkinsonism-dementia complex (PDC), as well as in basic biological experiments to determine the elemental changes in cells due to incorporation of foreign metal elements. The substantia nigra (SN) tissue obtained from the autopsy specimens of patients with Guamanian parkinsonism-dementia complex (PDC) and control cases was examined. Quantitative XRF analysis showed that neuromelanin granules of parkinsonian SN contained higher levels of Fe than those of the control. The concentrations were in the ranges of 2300-3100 ppm and 2000-2400 ppm, respectively. On the contrary, Zn and Ni in neuromelanin granules of SN tissue from the PDC case were lower than those of the control. In particular, Zn was less than 40 ppm in SN tissue from the PDC case while it was 560-810 ppm in the control. These changes are considered to be closely related to the neurodegeneration and cell death.
Terahertz Computed Tomography of NASA Thermal Protection System Materials
NASA Technical Reports Server (NTRS)
Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.
2011-01-01
A terahertz axial computed tomography system has been developed that uses time domain measurements in order to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot) with no safety concerns as for x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks for the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and better define embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.
Automatic cortical thickness analysis on rodent brain
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Ehlers, Cindy; Crews, Fulton; Niethammer, Marc; Budin, Francois; Paniagua, Beatriz; Sulik, Kathy; Johns, Josephine; Styner, Martin; Oguz, Ipek
2011-03-01
Localized difference in the cortex is one of the most useful morphometric traits in human and animal brain studies. There are many tools and methods already developed to automatically measure and analyze cortical thickness for the human brain. However, these tools cannot be directly applied to rodent brains due to the difference in scale; even adult rodent brains are 50 to 100 times smaller than human brains. This paper describes an algorithm for automatically measuring the cortical thickness of mouse and rat brains. The algorithm consists of three steps: segmentation, thickness measurement, and statistical analysis among experimental groups. The segmentation step provides the neocortex separation from other brain structures and thus is a preprocessing step for the thickness measurement. In the thickness measurement step, the thickness is computed by solving a Laplacian PDE and a transport equation. The Laplacian PDE first creates streamlines as an analogy of cortical columns; the transport equation computes the length of the streamlines. The result is stored as a thickness map over the neocortex surface. For the statistical analysis, it is important to sample thickness at corresponding points. This is achieved by the particle correspondence algorithm which minimizes entropy between dynamically moving sample points called particles. Since the computational cost of the correspondence algorithm may limit the number of corresponding points, we use thin-plate spline based interpolation to increase the number of corresponding sample points. As a driving application, we measured the thickness difference to assess the effects of adolescent intermittent ethanol exposure that persist into adulthood and performed a t-test between the control and exposed rat groups. We found significantly differing regions in both hemispheres.
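A toy two-dimensional sketch of the thickness computation described above: solve Laplace's equation between the inner and outer boundaries of an annular "cortex", then measure the streamline through a point by stepping along the normalized potential gradient in both directions. Grid size, geometry, and step length are arbitrary; the paper's pipeline additionally handles segmentation, the transport-equation formulation of length, and particle-based correspondence, none of which is reproduced here.

```python
# Toy 2-D version of the Laplace-based thickness measurement: an annulus stands in
# for the cortex, with phi=0 on the inner surface and phi=1 on the outer surface.
import numpy as np

n = 200
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2)
inner, outer = 40.0, 70.0
cortex = (r > inner) & (r < outer)        # region whose thickness we want (exact answer: 30)

# Solve Laplace's equation in the cortex by Jacobi iteration, boundary values held fixed.
phi = np.where(r >= outer, 1.0, 0.0)
for _ in range(5000):
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(cortex, avg, phi)

gy, gx = np.gradient(phi)
norm = np.hypot(gx, gy) + 1e-12
ux, uy = gx / norm, gy / norm             # unit field along the cortical "columns"

def streamline_length(px, py, sign, step=0.25, max_steps=2000):
    """March along +/- the unit gradient until leaving the cortex; return path length."""
    length = 0.0
    for _ in range(max_steps):
        i, j = int(round(py)), int(round(px))
        if not (0 <= i < n and 0 <= j < n and cortex[i, j]):
            break
        px += sign * step * ux[i, j]
        py += sign * step * uy[i, j]
        length += step
    return length

# Thickness at one sample point = outward plus inward streamline length.
px, py = n / 2 + 55.0, n / 2              # a point inside the annulus (radius 55)
thickness = streamline_length(px, py, +1) + streamline_length(px, py, -1)
print(f"estimated thickness: {thickness:.1f} pixels (exact: {outer - inner:.0f})")
```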
Event-Triggered Model Predictive Control for Embedded Artificial Pancreas Systems.
Chakrabarty, Ankush; Zavitsanou, Stamatina; Doyle, Francis J; Dassau, Eyal
2018-03-01
The development of artificial pancreas (AP) technology for deployment in low-energy, embedded devices is contingent upon selecting an efficient control algorithm for regulating glucose in people with type 1 diabetes mellitus. In this paper, we aim to lower the energy consumption of the AP by reducing controller updates, that is, the number of times the decision-making algorithm is invoked to compute an appropriate insulin dose. Physiological insights into glucose management are leveraged to design an event-triggered model predictive controller (MPC) that operates efficiently, without compromising patient safety. The proposed event-triggered MPC is deployed on a wearable platform. Its robustness to latent hypoglycemia, model mismatch, and meal misinformation is tested, with and without meal announcement, on the full version of the US-FDA accepted UVA/Padova metabolic simulator. The event-based controller remains on for 18 h of 41 h in closed loop with unannounced meals, while maintaining glucose in 70-180 mg/dL for 25 h, compared to 27 h for a standard MPC controller. With meal announcement, the time in 70-180 mg/dL is almost identical, with the controller operating a mere 25.88% of the time in comparison with a standard MPC. A novel control architecture for AP systems enables safe glycemic regulation with reduced processor computations. Our proposed framework integrates seamlessly with a wide variety of popular MPC variants reported in AP research, customizes the tradeoff between glycemic regulation and efficacy according to prior design specifications, and eliminates the need for judicious prior selection of controller sampling times.
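The controller above is tied to a full MPC formulation and the UVA/Padova simulator; purely as a toy illustration of the event-triggering idea, the sketch below runs a crude first-order glucose model and re-invokes an equally crude controller only when the measured glucose leaves a tolerance band around the value predicted at the last update, counting how many updates are saved relative to invoking it every sample. All dynamics, thresholds, and gains are assumptions.

```python
# Toy illustration of event-triggered control updates (not the paper's MPC or the
# UVA/Padova model): the controller is re-invoked only when measured glucose drifts
# outside a tolerance band around the value predicted at the last update.
import numpy as np

rng = np.random.default_rng(4)
dt = 5.0                      # minutes per sample (typical CGM period)
horizon = int(24 * 60 / dt)   # one simulated day
target, band = 120.0, 15.0    # mg/dL setpoint and trigger tolerance (assumed)

G, insulin = 180.0, 0.0       # initial glucose and insulin action (arbitrary)
G_pred = G
updates = 0
for k in range(horizon):
    # Crude plant: glucose relaxes toward a baseline, lowered by insulin, bumped by meals.
    meal = 60.0 if k > 0 and k % (6 * 60 // int(dt)) == 0 else 0.0
    G += dt * (-0.01 * (G - 140.0) - 0.5 * insulin) + meal + rng.normal(0, 1.0)

    # Event trigger: only call the controller if glucose left the +/- band
    # around the prediction made at the previous update.
    if abs(G - G_pred) > band:
        updates += 1
        # Stand-in "controller": proportional dose above target (the real system solves an MPC).
        insulin = max(0.0, 0.01 * (G - target))
        G_pred = G + dt * (-0.01 * (G - 140.0) - 0.5 * insulin)   # one-step prediction
    insulin *= 0.98            # insulin action decays between updates

print(f"controller updates: {updates} of {horizon} samples "
      f"({100 * (1 - updates / horizon):.0f}% of invocations avoided)")
```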
Simulation of Liquid Injection Thrust Vector Control for Mars Ascent Vehicle
NASA Technical Reports Server (NTRS)
Gudenkauf, Jared
2017-01-01
The Jet Propulsion Laboratory is currently in the initial design phase for a potential Mars Ascent Vehicle, which will be landed on Mars, stay on the surface for a period of time, collect samples from the Mars 2020 rover, and then lift these samples into orbit around Mars. The engineers at JPL have down-selected to a hybrid wax-based-fuel rocket using a liquid oxidizer based on nitrogen tetroxide, or a Mixed Oxide of Nitrogen. To lower the gross lift-off mass of the vehicle, the thrust vector control system will use liquid injection of the oxidizer to deflect the thrust of the main nozzle instead of using a gimbaled nozzle. The disadvantage of the liquid injection system is its low technology readiness level with a hybrid rocket. Presented in this paper is an effort to simulate the Mars Ascent Vehicle hybrid rocket nozzle and liquid injection thrust vector control system using the computational fluid dynamics flow solver Loci/Chem. This effort also includes determining the sensitivity of the thrust vector control system to a number of different design variables for the injection ports, including axial location, number of adjacent ports, injection angle, and distance between the ports.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
A hybrid analog-digital phase-locked loop for frequency mode non-contact scanning probe microscopy.
Mehta, M M; Chandrasekhar, V
2014-01-01
Non-contact scanning probe microscopy (SPM) has developed into a powerful technique to image many different properties of samples. The conventional method involves monitoring the amplitude, phase, or frequency of a cantilever oscillating at or near its resonant frequency as it is scanned across the surface of a sample. For high Q factor cantilevers, monitoring the resonant frequency is the preferred method in order to obtain reasonable scan times. This can be done by using a phase-locked-loop (PLL). PLLs can be obtained as commercial integrated circuits, but these do not have the frequency resolution required for SPM. To increase the resolution, all-digital PLLs requiring sophisticated digital signal processors or field programmable gate arrays have also been implemented. We describe here a hybrid analog/digital PLL where most of the components are implemented using discrete analog integrated circuits, but the frequency resolution is provided by a direct digital synthesis chip controlled by a simple peripheral interface controller (PIC) microcontroller. The PLL has excellent frequency resolution and noise, and can be controlled and read by a computer via a universal serial bus connection.
A hybrid analog-digital phase-locked loop for frequency mode non-contact scanning probe microscopy
NASA Astrophysics Data System (ADS)
Mehta, M. M.; Chandrasekhar, V.
2014-01-01
Non-contact scanning probe microscopy (SPM) has developed into a powerful technique to image many different properties of samples. The conventional method involves monitoring the amplitude, phase, or frequency of a cantilever oscillating at or near its resonant frequency as it is scanned across the surface of a sample. For high Q factor cantilevers, monitoring the resonant frequency is the preferred method in order to obtain reasonable scan times. This can be done by using a phase-locked-loop (PLL). PLLs can be obtained as commercial integrated circuits, but these do not have the frequency resolution required for SPM. To increase the resolution, all-digital PLLs requiring sophisticated digital signal processors or field programmable gate arrays have also been implemented. We describe here a hybrid analog/digital PLL where most of the components are implemented using discrete analog integrated circuits, but the frequency resolution is provided by a direct digital synthesis chip controlled by a simple peripheral interface controller (PIC) microcontroller. The PLL has excellent frequency resolution and noise, and can be controlled and read by a computer via a universal serial bus connection.
Development of graphene oxide materials with controllably modified optical properties
NASA Astrophysics Data System (ADS)
Naumov, Anton; Galande, Charudatta; Mohite, Aditya; Ajayan, Pulickel; Weisman, R. Bruce
2015-03-01
One of the major current goals in graphene research is modifying its optical and electronic properties through controllable generation of band gaps. To achieve this, we have studied the changes in the optical properties of reduced graphene oxide (RGO) in water suspension upon exposure to ozone. Ozonation for periods of 5 to 35 minutes caused dramatic bleaching of its absorption and the concurrent appearance of strong visible fluorescence in previously nonemissive samples. These observed spectral changes suggest a functionalization-induced band gap opening. The sample fluorescence induced by ozonation was found to be highly pH-dependent: sharp and structured emission features resembling the spectra of molecular fluorophores were present at basic pH values, but this emission reversibly broadened and red-shifted in acidic conditions. These findings are consistent with excited-state protonation of the emitting species in acidic media. Oxygen-containing addends resulting from the ozonation were detected by XPS and FTIR spectroscopy and related to optical transitions in localized graphene oxide fluorophores by computational modeling. Further research will be directed toward producing graphene-based optoelectronic devices with tailored and controllable optical properties.
Mair, Grant; von Kummer, Rüdiger; Adami, Alessandro; White, Philip M.; Adams, Matthew E.; Yan, Bernard; Demchuk, Andrew M.; Farrall, Andrew J.; Sellar, Robin J.; Sakka, Eleni; Palmer, Jeb; Perry, David; Lindley, Richard I.; Sandercock, Peter A.G.
2017-01-01
Background and Purpose— Computed tomographic angiography and magnetic resonance angiography are used increasingly to assess arterial patency in patients with ischemic stroke. We determined which baseline angiography features predict response to intravenous thrombolytics in ischemic stroke using randomized controlled trial data. Methods— We analyzed angiograms from the IST-3 (Third International Stroke Trial), an international, multicenter, prospective, randomized controlled trial of intravenous alteplase. Readers, masked to clinical, treatment, and outcome data, assessed prerandomization computed tomographic angiography and magnetic resonance angiography for presence, extent, location, and completeness of obstruction and collaterals. We compared angiography findings to 6-month functional outcome (Oxford Handicap Scale) and tested for interactions with alteplase, using ordinal regression in adjusted analyses. We also meta-analyzed all available angiography data from other randomized controlled trials of intravenous thrombolytics. Results— In IST-3, 300 patients had prerandomization angiography (computed tomographic angiography=271 and magnetic resonance angiography=29). On multivariable analysis, more extensive angiographic obstruction and poor collaterals independently predicted poor outcome (P<0.01). We identified no significant interaction between angiography findings and alteplase effect on Oxford Handicap Scale (P≥0.075) in IST-3. In meta-analysis (5 trials of alteplase or desmoteplase, including IST-3, n=591), there was a significantly increased benefit of thrombolytics on outcome (odds ratio>1 indicates benefit) in patients with (odds ratio, 2.07; 95% confidence interval, 1.18–3.64; P=0.011) versus without (odds ratio, 0.88; 95% confidence interval, 0.58–1.35; P=0.566) arterial obstruction (P for interaction 0.017). Conclusions— Intravenous thrombolytics provide benefit to stroke patients with computed tomographic angiography or magnetic resonance angiography evidence of arterial obstruction, but the sample was underpowered to demonstrate significant treatment benefit or harm among patients with apparently patent arteries. Clinical Trial Registration— URL: http://www.isrctn.com. Unique identifier: ISRCTN25765518. PMID:28008093
Mair, Grant; von Kummer, Rüdiger; Adami, Alessandro; White, Philip M; Adams, Matthew E; Yan, Bernard; Demchuk, Andrew M; Farrall, Andrew J; Sellar, Robin J; Sakka, Eleni; Palmer, Jeb; Perry, David; Lindley, Richard I; Sandercock, Peter A G; Wardlaw, Joanna M
2017-02-01
Computed tomographic angiography and magnetic resonance angiography are used increasingly to assess arterial patency in patients with ischemic stroke. We determined which baseline angiography features predict response to intravenous thrombolytics in ischemic stroke using randomized controlled trial data. We analyzed angiograms from the IST-3 (Third International Stroke Trial), an international, multicenter, prospective, randomized controlled trial of intravenous alteplase. Readers, masked to clinical, treatment, and outcome data, assessed prerandomization computed tomographic angiography and magnetic resonance angiography for presence, extent, location, and completeness of obstruction and collaterals. We compared angiography findings to 6-month functional outcome (Oxford Handicap Scale) and tested for interactions with alteplase, using ordinal regression in adjusted analyses. We also meta-analyzed all available angiography data from other randomized controlled trials of intravenous thrombolytics. In IST-3, 300 patients had prerandomization angiography (computed tomographic angiography=271 and magnetic resonance angiography=29). On multivariable analysis, more extensive angiographic obstruction and poor collaterals independently predicted poor outcome (P<0.01). We identified no significant interaction between angiography findings and alteplase effect on Oxford Handicap Scale (P≥0.075) in IST-3. In meta-analysis (5 trials of alteplase or desmoteplase, including IST-3, n=591), there was a significantly increased benefit of thrombolytics on outcome (odds ratio>1 indicates benefit) in patients with (odds ratio, 2.07; 95% confidence interval, 1.18-3.64; P=0.011) versus without (odds ratio, 0.88; 95% confidence interval, 0.58-1.35; P=0.566) arterial obstruction (P for interaction 0.017). Intravenous thrombolytics provide benefit to stroke patients with computed tomographic angiography or magnetic resonance angiography evidence of arterial obstruction, but the sample was underpowered to demonstrate significant treatment benefit or harm among patients with apparently patent arteries. URL: http://www.isrctn.com. Unique identifier: ISRCTN25765518. © 2016 The Authors.
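The pooled odds ratios quoted in this abstract are the kind of estimate produced by inverse-variance (fixed-effect) pooling of per-trial log odds ratios. The sketch below shows that calculation with made-up trial numbers; it is not the IST-3 data, and it may differ from the exact meta-analytic model the authors used.

```python
# A minimal sketch of inverse-variance (fixed-effect) pooling of trial odds
# ratios. The per-trial numbers below are placeholders, not real trial data.
import numpy as np

# hypothetical per-trial odds ratios and upper 95% CI bounds
odds_ratios = np.array([2.4, 1.8, 1.9])
ci_upper    = np.array([5.0, 3.9, 4.2])

log_or = np.log(odds_ratios)
se     = (np.log(ci_upper) - log_or) / 1.96   # SE recovered from the CI bound
w      = 1.0 / se**2                          # inverse-variance weights

pooled    = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi    = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```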
Springvloet, Linda; Lechner, Lilian; de Vries, Hein; Candel, Math J J M; Oenema, Anke
2015-01-19
Web-based, computer-tailored nutrition education interventions can be effective in modifying self-reported dietary behaviors. Traditional computer-tailored programs primarily targeted individual cognitions (knowledge, awareness, attitude, self-efficacy). Tailoring on additional variables such as self-regulation processes and environmental-level factors (the home food environment arrangement and perception of availability and prices of healthy food products in supermarkets) may improve efficacy and effect sizes (ES) of Web-based computer-tailored nutrition education interventions. This study evaluated the short- and medium-term efficacy and educational differences in efficacy of a cognitive and environmental feedback version of a Web-based computer-tailored nutrition education intervention on self-reported fruit, vegetable, high-energy snack, and saturated fat intake compared to generic nutrition information in the total sample and among participants who did not comply with dietary guidelines (the risk groups). A randomized controlled trial was conducted with a basic (tailored intervention targeting individual cognition and self-regulation processes; n=456), plus (basic intervention additionally targeting environmental-level factors; n=459), and control (generic nutrition information; n=434) group. Participants were recruited from the general population and randomly assigned to a study group. Self-reported fruit, vegetable, high-energy snack, and saturated fat intake were assessed at baseline and at 1- (T1) and 4-months (T2) postintervention using online questionnaires. Linear mixed model analyses examined group differences in change over time. Educational differences were examined with group×time×education interaction terms. In the total sample, the basic (T1: ES=-0.30; T2: ES=-0.18) and plus intervention groups (T1: ES=-0.29; T2: ES=-0.27) had larger decreases in high-energy snack intake than the control group. The basic version resulted in a larger decrease in saturated fat intake than the control intervention (T1: ES=-0.19; T2: ES=-0.17). In the risk groups, the basic version caused larger decreases in fat (T1: ES=-0.28; T2: ES=-0.28) and high-energy snack intake (T1: ES=-0.34; T2: ES=-0.20) than the control intervention. The plus version resulted in a larger increase in fruit (T1: ES=0.25; T2: ES=0.37) and a larger decrease in high-energy snack intake (T1: ES=-0.38; T2: ES=-0.32) than the control intervention. For high-energy snack intake, educational differences were found. Stratified analyses showed that the plus version was most effective for high-educated participants. Both intervention versions were more effective in improving some of the self-reported dietary behaviors than generic nutrition information, especially in the risk groups, among both higher- and lower-educated participants. For fruit intake, only the plus version was more effective than providing generic nutrition information. Although feasible, incorporating environmental-level information is time-consuming. Therefore, the basic version may be more feasible for further implementation, although inclusion of feedback on the arrangement of the home food environment and on availability and prices may be considered for fruit and, for high-educated people, for high-energy snack intake. Netherlands Trial Registry NTR3396; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3396 (Archived by WebCite at http://www.webcitation.org/6VNZbdL6w).
Lechner, Lilian; de Vries, Hein; Candel, Math JJM; Oenema, Anke
2015-01-01
Background Web-based, computer-tailored nutrition education interventions can be effective in modifying self-reported dietary behaviors. Traditional computer-tailored programs primarily targeted individual cognitions (knowledge, awareness, attitude, self-efficacy). Tailoring on additional variables such as self-regulation processes and environmental-level factors (the home food environment arrangement and perception of availability and prices of healthy food products in supermarkets) may improve efficacy and effect sizes (ES) of Web-based computer-tailored nutrition education interventions. Objective This study evaluated the short- and medium-term efficacy and educational differences in efficacy of a cognitive and environmental feedback version of a Web-based computer-tailored nutrition education intervention on self-reported fruit, vegetable, high-energy snack, and saturated fat intake compared to generic nutrition information in the total sample and among participants who did not comply with dietary guidelines (the risk groups). Methods A randomized controlled trial was conducted with a basic (tailored intervention targeting individual cognition and self-regulation processes; n=456), plus (basic intervention additionally targeting environmental-level factors; n=459), and control (generic nutrition information; n=434) group. Participants were recruited from the general population and randomly assigned to a study group. Self-reported fruit, vegetable, high-energy snack, and saturated fat intake were assessed at baseline and at 1- (T1) and 4-months (T2) postintervention using online questionnaires. Linear mixed model analyses examined group differences in change over time. Educational differences were examined with group×time×education interaction terms. Results In the total sample, the basic (T1: ES=–0.30; T2: ES=–0.18) and plus intervention groups (T1: ES=–0.29; T2: ES=–0.27) had larger decreases in high-energy snack intake than the control group. The basic version resulted in a larger decrease in saturated fat intake than the control intervention (T1: ES=–0.19; T2: ES=–0.17). In the risk groups, the basic version caused larger decreases in fat (T1: ES=–0.28; T2: ES=–0.28) and high-energy snack intake (T1: ES=–0.34; T2: ES=–0.20) than the control intervention. The plus version resulted in a larger increase in fruit (T1: ES=0.25; T2: ES=0.37) and a larger decrease in high-energy snack intake (T1: ES=–0.38; T2: ES=–0.32) than the control intervention. For high-energy snack intake, educational differences were found. Stratified analyses showed that the plus version was most effective for high-educated participants. Conclusions Both intervention versions were more effective in improving some of the self-reported dietary behaviors than generic nutrition information, especially in the risk groups, among both higher- and lower-educated participants. For fruit intake, only the plus version was more effective than providing generic nutrition information. Although feasible, incorporating environmental-level information is time-consuming. Therefore, the basic version may be more feasible for further implementation, although inclusion of feedback on the arrangement of the home food environment and on availability and prices may be considered for fruit and, for high-educated people, for high-energy snack intake. Trial Registration Netherlands Trial Registry NTR3396; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3396 (Archived by WebCite at http://www.webcitation.org/6VNZbdL6w). 
PMID:25599828
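The group differences in change over time reported above were estimated with linear mixed models. The sketch below shows one minimal way such a model can be specified, with a random intercept per participant and a group-by-time interaction; the simulated data, variable names, and software choice (statsmodels) are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch of a linear mixed model with a group x time interaction and a
# random intercept per participant. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for group in ("control", "basic", "plus"):
    for pid in range(60):
        baseline = rng.normal(10, 2)                 # snack intake at baseline
        for time in (0, 1, 2):                       # baseline, T1, T2
            drop = {"control": 0.0, "basic": 0.8, "plus": 0.9}[group] * time
            rows.append({"id": f"{group}{pid}", "group": group, "time": time,
                         "intake": baseline - drop + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# fixed effects for group, time, and their interaction; random intercept per id
model = smf.mixedlm("intake ~ C(group) * time", df, groups=df["id"])
result = model.fit()
print(result.summary())
```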
A Functional Analytic Approach To Computer-Interactive Mathematics
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed. PMID:15898471
NASA Astrophysics Data System (ADS)
Sayab, Mohammad; Miettinen, Arttu; Aerden, Domingo; Karell, Fredrik
2017-10-01
We applied X-ray computed microtomography (μ-CT) in combination with anisotropy of magnetic susceptibility (AMS) analysis to study metamorphic rock fabrics in an oriented drill core sample of pyrite-pyrrhotite-quartz-mica schist. The sample was extracted from the Paleoproterozoic Martimo metasedimentary belt of northern Finland. The μ-CT resolves the spatial distribution, shape, and orientation of 25,920 pyrrhotite and 153 pyrite grains localized in mm-thick metapelitic laminae. Together with microstructural analysis, the μ-CT allows us to interpret the prolate symmetry of the AMS ellipsoid and its relationship to the deformation history. The AMS of the sample is controlled by pyrrhotite porphyroblasts that grew syntectonically during D1 in subhorizontal microlithons. The short and intermediate axes (K3 and K2) of the AMS ellipsoid interchanged positions during a subsequent deformation (D2) that intensely crenulated S1 and deformed pyrrhotite, while the long axes (K1) maintained a constant position parallel to the maximum stretching direction. However, it is likely that all three AMS axes switched from D1 to D2, as did the three principal axes of the shape ellipsoid of the pyrite porphyroblasts. The superposition of D1 and D2 produced a type-2 fold interference pattern.
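Numerically, the AMS principal axes K1 ≥ K2 ≥ K3 discussed above are the eigenvalues and eigenvectors of the symmetric second-rank susceptibility tensor. The sketch below illustrates that calculation for a made-up prolate fabric; the tensor values are not measurements from this study.

```python
# Minimal sketch: AMS principal axes from eigendecomposition of a symmetric
# susceptibility tensor. The tensor below is a hypothetical prolate example.
import numpy as np

k = np.array([[1.20, 0.02, 0.01],
              [0.02, 1.00, 0.00],
              [0.01, 0.00, 0.98]])      # made-up susceptibility tensor (SI)

eigvals, eigvecs = np.linalg.eigh(k)     # ascending for symmetric matrices
order = np.argsort(eigvals)[::-1]        # reorder to K1 >= K2 >= K3
K = eigvals[order]
axes = eigvecs[:, order]                 # columns are the K1, K2, K3 directions

lineation = K[0] / K[1]                  # L = K1/K2
foliation = K[1] / K[2]                  # F = K2/K3; L > F means prolate fabric
print(f"K1={K[0]:.3f}  K2={K[1]:.3f}  K3={K[2]:.3f}  "
      f"L={lineation:.3f}  F={foliation:.3f}")
```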
A functional analytic approach to computer-interactive mathematics.
Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.
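The trained and derived relations in this matching-to-sample design have a simple combinatorial structure: A-B and B-C are trained, then the reversed (mutually entailed) and chained (combinatorially entailed) relations are tested. The toy sketch below enumerates those derived relations; the stimulus labels are illustrative and the code is not part of the experimental software.

```python
# Minimal sketch of derived stimulus relations: train A->B and B->C, then
# derive B->A, C->B (mutual entailment) and A->C, C->A (combinatorial
# entailment). Labels are illustrative only.

trained = {("A", "B"), ("B", "C")}                 # directly trained relations

def derive(relations):
    derived = set(relations)
    # mutual entailment: every trained relation implies its reverse
    derived |= {(y, x) for (x, y) in relations}
    # combinatorial entailment: chain relations through a shared member
    for (a, b) in list(derived):
        for (c, d) in list(derived):
            if b == c and a != d:
                derived.add((a, d))
    return derived - relations                      # only the untrained relations

print(sorted(derive(trained)))
# -> [('A', 'C'), ('B', 'A'), ('C', 'A'), ('C', 'B')]
```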
Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi
2011-04-01
The active involvement of the hospital laboratory in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategies of independent medical institutions are insufficient. To share clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data from their health care system's clinical computer applications over the internet to the ReNICS server located at Akita University Hospital. We designed the system to automate the statistical processing, so that participants can access the server and monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, the system also provides a document server, a microbiology and antimicrobial database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next-generation network for quality improvement of infection control.
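The abstract does not publish the ReNICS record format or interface, so the sketch below is only a hypothetical illustration of what a coded, formatted isolate record and its transfer to a central server might look like; the field names and URL are invented and are not the actual Akita-ReNICS specification.

```python
# Hypothetical sketch of a coded microbiology record being sent to a central
# surveillance server. Field names and the endpoint URL are invented.
import json
import urllib.request

record = {
    "facility_code": "LAB-042",               # anonymized laboratory identifier
    "collected_date": "2011-04-01",
    "specimen": "blood",
    "organism": "MRSA",
    "susceptibility": {"oxacillin": "R", "vancomycin": "S"},
}

req = urllib.request.Request(
    "https://renics.example/api/isolates",     # placeholder URL, not real
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # left commented out: the endpoint is fictitious
```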
Hunt, G B; Luff, J A; Daniel, L; Van den Bergh, R
2013-11-01
The aims of this prospective study were to quantify steatosis in dogs with congenital portosystemic shunts (CPS) using a fat-specific stain, to compare the amount of steatosis in different lobes of the liver, and to evaluate intra- and interobserver variability in lipid point counting. Computer-assisted point counting of lipid droplets was undertaken following Oil Red O staining in 21 dogs with congenital portosystemic shunts and 9 control dogs. Dogs with congenital portosystemic shunts had significantly more small lipid droplets (<6 μm) than control dogs (P = .0013 and .0002, respectively). There was no significant difference in steatosis between liver lobes for either control or CPS dogs. Significant differences were seen between observers for the number of large lipid droplets (>9 μm) and lipogranulomas per tissue point (P = .023 and .01, respectively). In conclusion, computer-assisted counting of lipid droplets following Oil Red O staining of liver biopsy samples allows objective measurement and detection of significant differences between dogs with CPS and normal dogs. This method will allow future evaluation of the relationship between different presentations of CPS (anatomy, age, breed) and lipidosis, as well as the impact of hepatic lipidosis on outcomes following surgical shunt attenuation.
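As a rough illustration of computer-assisted droplet counting of this kind, the sketch below thresholds an image, labels connected regions, and bins them by equivalent diameter using the <6 μm and >9 μm cut-offs mentioned above. The pixel calibration, threshold, and library choice (scikit-image) are assumptions, not the published method.

```python
# Minimal sketch of droplet counting: threshold, label connected regions, and
# bin regions by equivalent diameter. Calibration and cut-offs are assumptions.
import numpy as np
from skimage import measure

UM_PER_PIXEL = 0.5                      # assumed image calibration

def count_droplets(image: np.ndarray, threshold: float):
    mask = image > threshold            # stain-positive pixels
    labels = measure.label(mask)
    small = large = 0
    for region in measure.regionprops(labels):
        diameter_um = region.equivalent_diameter * UM_PER_PIXEL
        if diameter_um < 6:
            small += 1
        elif diameter_um > 9:
            large += 1
    return small, large

# toy example: a blank field with one small and one large bright blob
img = np.zeros((100, 100))
img[10:14, 10:14] = 1.0                 # ~4 px wide -> "small" droplet
img[40:70, 40:70] = 1.0                 # ~30 px wide -> "large" droplet
print(count_droplets(img, threshold=0.5))   # -> (1, 1)
```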
The in vivo wear resistance of 12 composite resins.
Lang, B R; Bloem, T J; Powers, J M; Wang, R F
1992-09-01
The in vivo wear resistance of 12 composite resins was compared with an amalgam control using the Latin Square experimental design. Sixteen edentulous patients wearing specially designed complete dentures formed the experimental population. The Michigan Computer Graphics Measurement System was used to digitize the surface of the control and composite resin samples before and after 3-month test periods to obtain wear data. The 12 composite resins selected for this investigation, based on their published classification types, comprised seven fine-particle composites, three blends, and two microfilled composite resins. The Latin Square experimental design was found to be valid, with the factor of material being statistically different at the 5% level of significance. Wear was computed as volume loss (mm3/mm2), and all of the composites studied had more wear than the amalgam control (P = .001). After 3 months, the mean (error) wear of the amalgam was 0.028 (0.006). Mean (error) wear values for the 12 composites were ranked from most to least by mean volume loss. The absence of any relationship between mean wear volume loss and the volume percentage of filler was confirmed by the correlation coefficient r = -0.158.
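Wear expressed as volume loss per unit area (mm3/mm2) can be computed directly from digitized before/after surface height maps. The sketch below shows that arithmetic on synthetic data; the grid spacing is an assumption, and this is not the Michigan system's actual algorithm.

```python
# Minimal sketch of wear as volume loss per unit area from before/after height
# maps. Grid spacing and data are illustrative only.
import numpy as np

PIXEL_AREA_MM2 = 0.01 * 0.01            # assumed 10 um x 10 um grid spacing

def wear_volume_loss(before: np.ndarray, after: np.ndarray) -> float:
    """Volume loss per unit sample area, in mm^3/mm^2."""
    height_loss = np.clip(before - after, 0.0, None)   # mm of material removed
    volume_mm3 = np.sum(height_loss) * PIXEL_AREA_MM2
    area_mm2 = before.size * PIXEL_AREA_MM2
    return volume_mm3 / area_mm2

rng = np.random.default_rng(1)
before = rng.normal(1.0, 0.001, size=(200, 200))
after = before - 0.028 + rng.normal(0.0, 0.001, size=(200, 200))   # ~0.028 mm wear
print(f"{wear_volume_loss(before, after):.3f} mm^3/mm^2")           # ~0.028
```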
One GHz digitizer for space based laser altimeter
NASA Technical Reports Server (NTRS)
Staples, Edward J.
1991-01-01
This is the final report for the research and development of the one GHz digitizer for a space-based laser altimeter. A feasibility model was designed, built, and tested. Only partial testing of the essential functions of the digitizer was completed. Hybrid technology was incorporated which allows analog storage (memory) of the digitally sampled data. The actual sampling rate is 62.5 MHz per channel, executed in 16 parallel channels to provide an effective sampling rate of one GHz. The average power consumption of the one GHz digitizer is not more than 1.5 Watts. A one GHz oscillator is incorporated for timing purposes. This signal is also made available externally for system timing. A software package was also developed for internal use (controls, commands, etc.) and for data communication with the host computer. The digitizer is equipped with an onboard microprocessor for this purpose.
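The 16-channel scheme described above is a time-interleaved sampling architecture: each channel runs at 62.5 MHz with a staggered trigger offset, and the merged record has an effective 1 GHz sample rate (16 x 62.5 MHz). The sketch below demonstrates the interleaving arithmetic on a synthetic waveform; it is not the digitizer's firmware.

```python
# Minimal sketch of time-interleaved sampling: 16 staggered channels at
# 62.5 MHz merged into one record with an effective 1 GHz sample rate.
import numpy as np

N_CHANNELS = 16
F_CHANNEL = 62.5e6                       # per-channel sample rate, Hz
F_EFFECTIVE = N_CHANNELS * F_CHANNEL     # 1 GHz effective rate

def interleave(channels: np.ndarray) -> np.ndarray:
    """channels: shape (16, n) per-channel samples -> single 1-D record."""
    return channels.T.reshape(-1)        # sample k of channel c lands at k*16 + c

# synthetic test: each staggered channel samples the same Gaussian "echo"
n_per_channel = 64
t = (np.arange(n_per_channel)[None, :] * N_CHANNELS
     + np.arange(N_CHANNELS)[:, None]) / F_EFFECTIVE
channels = np.exp(-((t - 3e-7) / 5e-8) ** 2)
record = interleave(channels)
print(record.shape, "effective rate:", F_EFFECTIVE / 1e9, "GHz")
```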
ERIC Educational Resources Information Center
Motamedi, Vahid; Yaghoubi, Razeyah Mohagheghyan
2015-01-01
This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 high school male students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher made questionnaire (to collect information on computer game usage) and the…
Susukida, Ryoko; Crum, Rosa M; Ebnesajjad, Cyrus; Stuart, Elizabeth A; Mojtabai, Ramin
2017-07-01
To compare randomized controlled trial (RCT) sample treatment effects with the population effects of substance use disorder (SUD) treatment. Statistical weighting was used to re-compute the effects from 10 RCTs such that the participants in the trials had characteristics that resembled those of patients in the target populations. Multi-site RCTs and usual SUD treatment settings in the United States. A total of 3592 patients in 10 RCTs and 1 602 226 patients from usual SUD treatment settings between 2001 and 2009. Three outcomes of SUD treatment were examined: retention, urine toxicology and abstinence. We weighted the RCT sample treatment effects using propensity scores representing the conditional probability of participating in RCTs. Weighting the samples changed the significance of estimated sample treatment effects. Most commonly, positive effects of trials became statistically non-significant after weighting (three trials for retention and urine toxicology and one trial for abstinence); also, non-significant effects became significantly positive (one trial for abstinence) and significantly negative effects became non-significant (two trials for abstinence). There was suggestive evidence of treatment effect heterogeneity in subgroups that are under- or over-represented in the trials, some of which were consistent with the differences in average treatment effects between weighted and unweighted results. The findings of randomized controlled trials (RCTs) for substance use disorder treatment do not appear to be directly generalizable to target populations when the RCT samples do not reflect adequately the target populations and there is treatment effect heterogeneity across patient subgroups. © 2017 Society for the Study of Addiction.
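The weighting step described above can be illustrated with an inverse-odds-of-participation scheme: model the probability of being in the trial sample given observed characteristics, then weight trial participants by (1 - p)/p so their covariate distribution resembles the target population. The sketch below uses simulated data; it is not the authors' exact model or dataset.

```python
# Minimal sketch of propensity-based weighting to generalize a trial sample
# toward a target population. Data and covariates are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trial, n_pop = 500, 5000

# covariates (e.g., age, severity) differ between trial and target population
x_trial = rng.normal([40, 0.3], [10, 0.1], size=(n_trial, 2))
x_pop   = rng.normal([45, 0.5], [12, 0.2], size=(n_pop, 2))

X = np.vstack([x_trial, x_pop])
in_trial = np.concatenate([np.ones(n_trial), np.zeros(n_pop)])

p = LogisticRegression(max_iter=1000).fit(X, in_trial).predict_proba(x_trial)[:, 1]
weights = (1 - p) / p                    # inverse odds of trial participation

# weighted arm difference approximates the population-average effect under the
# usual ignorability assumptions
treated = rng.integers(0, 2, n_trial).astype(bool)
outcome = rng.normal(0.5, 1.0, n_trial) + 0.3 * treated
effect = (np.average(outcome[treated], weights=weights[treated])
          - np.average(outcome[~treated], weights=weights[~treated]))
print(f"weighted sample treatment effect: {effect:.2f}")
```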
Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min
2015-06-01
The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.
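GotCloud's internals are not shown in the abstract, so the sketch below is only a generic illustration of the pipeline pattern it describes: independent per-sample steps (alignment, quality control) run in parallel, followed by joint variant calling. Function names, sample IDs, and outputs are placeholders, not GotCloud code.

```python
# Generic sketch (not GotCloud's actual implementation) of parallel per-sample
# processing followed by a joint step across samples.
from concurrent.futures import ProcessPoolExecutor

SAMPLES = ["NA12878", "NA12891", "NA12892"]          # placeholder sample IDs

def align_and_qc(sample: str) -> dict:
    """Stand-in for per-sample alignment plus quality-control metrics."""
    return {"sample": sample, "mean_depth": 30.0, "passed_qc": True}

def joint_call(per_sample: list) -> list:
    """Stand-in for joint variant calling across samples that pass QC."""
    usable = [s["sample"] for s in per_sample if s["passed_qc"]]
    return [f"chr20:100{i}:A>G" for i, _ in enumerate(usable)]   # dummy output

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        per_sample = list(pool.map(align_and_qc, SAMPLES))
    print(joint_call(per_sample))
```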
Computer control of a microgravity mammalian cell bioreactor
NASA Technical Reports Server (NTRS)
Hall, William A.
1987-01-01
The initial steps taken in developing a completely menu-driven and totally automated computer control system for a bioreactor are discussed. This bioreactor is an electro-mechanical cell growth system requiring vigorous control of slowly changing parameters, many of which are so dynamically interactive that computer control is a necessity. The process computer will have two main functions. First, it will provide continuous environmental control utilizing low-signal-level transducers as inputs and high-powered control devices such as solenoids and motors as outputs. Second, it will provide continuous environmental monitoring, including mass data storage and periodic data dumps to a supervisory computer.
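As a rough sketch of the two functions described (continuous environmental control from transducer inputs to solenoid/motor outputs, plus monitoring with periodic dumps to a supervisory computer), the loop below uses hypothetical placeholder interfaces; it is not the system developed in the report.

```python
# Minimal sketch of a bioreactor control loop with logging and a periodic data
# dump. Sensor and actuator interfaces are hypothetical placeholders.
import time

TEMP_SETPOINT_C = 37.0
LOG = []

def read_temperature() -> float:
    """Placeholder for a low-level transducer read (e.g., via an ADC)."""
    return 36.8

def set_heater(on: bool) -> None:
    """Placeholder for driving a solenoid/relay output."""
    pass

def control_step() -> None:
    temp = read_temperature()
    set_heater(temp < TEMP_SETPOINT_C - 0.1)     # simple on/off control band
    LOG.append({"t": time.time(), "temp_c": temp})

def dump_to_supervisor() -> list:
    """Periodic data dump to the supervisory computer (placeholder)."""
    batch = list(LOG)
    LOG.clear()
    return batch

for _ in range(5):       # a few iterations of the (normally continuous) loop
    control_step()
print(len(dump_to_supervisor()), "records dumped")
```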