Performance Improvement of Raman Distributed Temperature System by Using Noise Suppression
NASA Astrophysics Data System (ADS)
Li, Jian; Li, Yunting; Zhang, Mingjiang; Liu, Yi; Zhang, Jianzhong; Yan, Baoqiang; Wang, Dong; Jin, Baoquan
2018-06-01
In a Raman distributed temperature system, the key factor for performance improvement is noise suppression, which strongly affects the sensing distance and temperature accuracy. We therefore propose and experimentally demonstrate a dynamic noise difference algorithm and the wavelet transform modulus maxima (WTMM) method to de-noise the Raman anti-Stokes signal. Experimental results show that the sensing distance increases from 3 km to 11.5 km and the temperature accuracy improves to 1.58 °C at a sensing distance of 10.4 km.
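The record gives no implementation details for the de-noising step; as a rough, generic illustration of wavelet-threshold de-noising applied to a simulated backscatter trace (not the authors' dynamic noise difference/WTMM algorithm; the sampling, attenuation and noise level below are made up), one might write:

```python
# Generic wavelet-threshold de-noising of a simulated anti-Stokes backscatter trace.
# This is only a sketch of the idea of wavelet de-noising, not the paper's method.
import numpy as np
import pywt

rng = np.random.default_rng(0)
z = np.linspace(0.0, 11.5e3, 8192)            # fibre position in metres (hypothetical)
clean = np.exp(-0.3e-3 * z)                   # idealised decaying backscatter signal
noisy = clean + 0.05 * rng.standard_normal(z.size)

coeffs = pywt.wavedec(noisy, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from finest scale
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))         # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print("RMS error before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS error after :", np.sqrt(np.mean((denoised - clean) ** 2)))
```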
Accuracy of Person-Fit Statistics: A Monte Carlo Study of the Influence of Aberrance Rates
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2011-01-01
Using a Monte Carlo experimental design, this research examined the relationship between answer patterns' aberrance rates and person-fit statistics (PFS) accuracy. It was observed that as the aberrance rate increased, the detection rates of PFS also increased until, in some situations, a peak was reached and then the detection rates of PFS…
Reversing the Course of Forgetting
ERIC Educational Resources Information Center
White, K. Geoffrey; Brown, Glenn S.
2011-01-01
Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance…
Koffarnus, Mikhail N; Katz, Jonathan L
2011-02-01
Increased signal-detection accuracy on the 5-choice serial reaction time (5-CSRT) task has been shown with drugs that are useful clinically in treating attention deficit hyperactivity disorder (ADHD), but these increases are often small and/or unreliable. By reducing the reinforcer frequency, it may be possible to increase the sensitivity of this task to pharmacologically induced improvements in accuracy. Rats were trained to respond on the 5-CSRT task on a fixed ratio (FR) 1, FR 3, or FR 10 schedule of reinforcement. Drugs that were and were not expected to enhance performance were then administered before experimental sessions. Significant increases in accuracy of signal detection were not typically obtained under the FR 1 schedule with any drug. However, d-amphetamine, methylphenidate, and nicotine typically increased accuracy under the FR 3 and FR 10 schedules. Increasing the FR requirement in the 5-CSRT task increases the likelihood of a positive result with clinically effective drugs, and may more closely resemble conditions in children with attention deficits.
Increased Accuracy of Ligand Sensing by Receptor Internalization and Lateral Receptor Diffusion
NASA Astrophysics Data System (ADS)
Aquino, Gerardo; Endres, Robert
2010-03-01
Many types of cells can sense external ligand concentrations with cell-surface receptors at extremely high accuracy. Interestingly, ligand-bound receptors are often internalized, a process also known as receptor-mediated endocytosis. While internalization is involved in a vast number of functions important for the life of a cell, it was recently also suggested to increase the accuracy of sensing ligand, since overcounting of the same ligand molecules is reduced. A similar role may be played by receptor diffusion on the cell membrane. Fast lateral receptor diffusion is known to be relevant in neurotransmission initiated by release of the neurotransmitter glutamate into the synaptic cleft between neurons. By binding ligand and then diffusing away from the region of neurotransmitter release, diffusing receptors can reasonably be expected to reduce local overcounting of the same ligand molecules in the region of signaling. By extending simple ligand-receptor models to out-of-equilibrium thermodynamics, we show that both receptor internalization and lateral diffusion increase the accuracy with which cells can measure ligand concentrations in the external environment. We confirm this with our model and give quantitative predictions for experimental parameter values, which compare favorably to experimental data for real receptors.
Attention bias for chocolate increases chocolate consumption--an attention bias modification study.
Werthmann, Jessica; Field, Matt; Roefs, Anne; Nederkoorn, Chantal; Jansen, Anita
2014-03-01
The current study examined experimentally whether a manipulated attention bias for food cues increases craving, chocolate intake and motivation to search for hidden chocolates. To test the effect of attention for food on subsequent chocolate intake, attention for chocolate was experimentally modified by instructing participants to look at chocolate stimuli ("attend chocolate" group) or at non-food stimuli ("attend shoes" group) during a novel attention bias modification task (antisaccade task). Chocolate consumption, changes in craving and search time for hidden chocolates were assessed. Eye-movement recordings were used to monitor accuracy during the experimental attention modification task as a possible moderator of effects. Regression analyses were conducted to test the effect of attention modification and modification accuracy on chocolate intake, craving and motivation to search for hidden chocolates. Results showed that participants with higher accuracy (+1 SD) ate more chocolate when they had to attend to chocolate and ate less chocolate when they had to attend to non-food stimuli. In contrast, for participants with lower accuracy (-1 SD), the results were exactly reversed. No effects of the experimental attention modification on craving or search time for hidden chocolates were found. We used chocolate as the food stimulus, so it remains unclear how our findings generalize to other types of food. These findings provide further evidence for a link between attention for food and food intake, and give an indication of the direction of this relationship. Copyright © 2013 Elsevier Ltd. All rights reserved.
Reversing the Course of Forgetting
White, K. Geoffrey; Brown, Glenn S
2011-01-01
Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance accuracy by increasing the rate of forgetting. In novel conditions, the houselight was lit at the beginning of a retention interval and then turned off partway through the retention interval. Accuracy was low at the beginning of the retention interval and then increased later in the interval. Thus the course of forgetting was reversed. Such a dissociation of forgetting from the passage of time is consistent with an interference account in which attention or stimulus control switches between the remembering task and extraneous events. PMID:21909163
NASA Astrophysics Data System (ADS)
Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.
2018-03-01
In this article, the issue of increasing CNC lathe accuracy by compensating for the static and dynamic errors of the machine is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered, which allow the machine's errors to be determined so that they can be compensated. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.
Social Power Increases Interoceptive Accuracy
Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk
2017-01-01
Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in the perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, of how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501
Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.
Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun
2016-07-11
A novel method based on the Pulse-Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust compass-heading calculation from polarized-skylight imaging; it shows good accuracy and reliability, especially under cloudy weather, partial shielding of the sky, and moonlight. The degree of polarization (DOP) and the angle of polarization (AOP), calculated from the full-sky polarization image, were used for the compass calculation. Because the DOP is highly sensitive to the environment, it was used to judge where the polarization information is disturbed, via the PCNN algorithm. Only areas with high AOP accuracy were kept after the DOP-based PCNN filtering, thereby greatly increasing the compass accuracy and robustness. Experimental results show that the compass accuracy was 0.1805° under clear weather. The method was also proven applicable under shielding by clouds, trees and buildings, with a compass accuracy better than 1°. With weak polarization sources such as moonlight, the method was shown experimentally to have an accuracy of 0.878°.
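The PCNN-based filtering itself cannot be reproduced from the abstract, but the DOP and AOP quantities it operates on follow from standard Stokes-parameter relations; a minimal sketch (assuming four polarizer-orientation images at 0°/45°/90°/135° and an arbitrary DOP threshold in place of the PCNN step) is:

```python
# Degree and angle of linear polarization from Stokes components, plus a naive
# DOP-based reliability mask standing in for the paper's PCNN filtering.
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    """Stokes I, Q, U from intensity images taken through a 0/45/90/135 deg polarizer."""
    I = 0.5 * (i0 + i45 + i90 + i135)
    Q = i0 - i90
    U = i45 - i135
    return I, Q, U

def dop_aop(I, Q, U, eps=1e-12):
    dop = np.sqrt(Q ** 2 + U ** 2) / (I + eps)   # degree of linear polarization
    aop = 0.5 * np.arctan2(U, Q)                 # angle of polarization, radians
    return dop, aop

def reliable_aop_mask(dop, threshold=0.1):
    """Keep only pixels whose DOP is high enough to trust the AOP (threshold is arbitrary)."""
    return dop > threshold
```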
Improvement of the accuracy of noise measurements by the two-amplifier correlation method.
Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P
2013-10-01
We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
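As a toy numerical illustration of why the cross-spectrum of two channels suppresses their uncorrelated amplifier noise while keeping the common DUT noise (a generic sketch, not the authors' in situ transimpedance scheme; all levels are made up):

```python
# Two-channel correlation: the common (DUT) noise survives cross-spectrum averaging,
# while each channel's own amplifier noise averages towards zero.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n = 1.0e5, 2 ** 20
dut = rng.standard_normal(n)                  # common noise of the device under test
ch1 = dut + 3.0 * rng.standard_normal(n)      # channel 1 = DUT + amplifier 1 noise
ch2 = dut + 3.0 * rng.standard_normal(n)      # channel 2 = DUT + amplifier 2 noise

f, p11 = signal.welch(ch1, fs=fs, nperseg=4096)
_, p12 = signal.csd(ch1, ch2, fs=fs, nperseg=4096)

print("single-channel PSD (biased by amplifier noise):", p11.mean())
print("cross-spectrum estimate of the DUT noise      :", np.abs(p12).mean())
```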
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of SAP using a capacitive probe and implementing calibrations. The experimental result, compared with an interferometer test, shows an accuracy of 0.068 μm root mean square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua
2015-01-01
An improved method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is used to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy available from multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose-accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
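As a rough sketch of one of the fusion ideas mentioned above, the following generic linear Kalman-filter measurement update fuses a camera position fix with an angle-sensor yaw reading into a single pose estimate (this is not the MOIFA algorithm, and all state dimensions, covariances and readings are made up):

```python
# Minimal linear Kalman-filter measurement updates fusing position and orientation.
import numpy as np

def kf_update(x, P, z, H, R):
    """One measurement update for state x with covariance P, given measurement z."""
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# State: [px, py, pz, yaw]; prior, e.g., from the robot's nominal kinematics.
x = np.array([0.50, 0.20, 0.80, 0.10])
P = np.diag([1e-2, 1e-2, 1e-2, 1e-2])

H_cam = np.hstack([np.eye(3), np.zeros((3, 1))])     # camera measures position only
R_cam = np.diag([1e-4, 1e-4, 1e-4])
H_ang = np.array([[0.0, 0.0, 0.0, 1.0]])             # angle sensor measures yaw only
R_ang = np.array([[1e-5]])

x, P = kf_update(x, P, np.array([0.512, 0.195, 0.803]), H_cam, R_cam)
x, P = kf_update(x, P, np.array([0.108]), H_ang, R_ang)
print("fused pose estimate:", x)
```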
Investigation of Portevin-Le Chatelier effect in 5456 Al-based alloy using digital image correlation
NASA Astrophysics Data System (ADS)
Cheng, Teng; Xu, Xiaohai; Cai, Yulong; Fu, Shihua; Gao, Yue; Su, Yong; Zhang, Yong; Zhang, Qingchuan
2015-02-01
A variety of experimental methods have been proposed for studying the Portevin-Le Chatelier (PLC) effect, mainly focusing on in-plane deformation. To achieve high-accuracy measurement, three-dimensional digital image correlation (3D-DIC) was employed in this work to investigate the PLC effect in a 5456 Al-based alloy. The temporal and spatial evolution of deformation over the full field of the specimen surface was observed. The large deformation of localized necking was determined experimentally. The distributions of out-of-plane displacement over the loading procedure were also obtained. Furthermore, a comparison of measurement accuracy between two-dimensional digital image correlation (2D-DIC) and 3D-DIC was performed. Owing to its theoretical restrictions, the measurement accuracy of 2D-DIC decreases as deformation increases; a maximum discrepancy of about 20% relative to 3D-DIC was observed in this work. Therefore, 3D-DIC is essential for high-accuracy investigation of the PLC effect.
A Short Distance CW-Radar Sensor at 77 GHz in LTCC for Industrial Applications
NASA Astrophysics Data System (ADS)
Rusch, Christian; Klein, Tobias; Beer, Stefan; Zwick, Thomas
2013-12-01
The paper presents a continuous-wave (CW) radar sensor for high-accuracy distance measurements in industrial applications. The use of radar sensors in industrial scenarios has the advantage of robust functionality in wet or dusty environments where optical systems reach their limits. This publication shows that accuracies of a few micrometers are possible with millimeter-wave systems. In addition to distance measurement results, the paper describes the sensor concept, the experimental setup with the measurement process, and possibilities for increasing the accuracy even further.
Achieving accuracy in first-principles calculations at extreme temperature and pressure
NASA Astrophysics Data System (ADS)
Mattsson, Ann; Wills, John
2013-06-01
First-principles calculations are increasingly used to provide EOS data at pressures and temperatures where experimental data is difficult or impossible to obtain. The lack of experimental data, however, also precludes validation of the calculations in those regimes. Factors influencing the accuracy of first-principles data include theoretical approximations, and computational approximations used in implementing and solving the underlying equations. The first category includes approximate exchange-correlation functionals and wave equations simplifying the Dirac equation. In the second category are, e.g., basis completeness and pseudo-potentials. While the first category is extremely hard to assess without experimental data, inaccuracies of the second type should be well controlled. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type. We will discuss the VASP Projector Augmented Wave potentials, with examples for Li and Mo. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
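The appendix referenced above uses other statistical packages; a rough Python analogue of the mixed-model part, with a random intercept per reader (a simplification of a full MRMC analysis, which would also treat cases as a crossed random effect; the file and column names are hypothetical), could look like:

```python
# Linear mixed model for decision latency by display type, random intercept per reader.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reader_study.csv")   # expected columns: reader, case, display, latency
model = smf.mixedlm("latency ~ display", data=df, groups=df["reader"])
result = model.fit()
print(result.summary())
```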
A novel speckle pattern—Adaptive digital image correlation approach with robust strain calculation
NASA Astrophysics Data System (ADS)
Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim
2012-02-01
Digital image correlation (DIC) has seen widespread acceptance and usage as a non-contact method for the determination of full-field displacements and strains in experimental mechanics. Advances in imaging hardware over the last decades have made high-resolution, high-speed cameras more affordable, so that large amounts of image data are available in typical DIC experimental scenarios. The work presented in this paper aims at maximizing both the accuracy and the speed of DIC methods when employed with such images. A low-level framework for speckle image partitioning is introduced, which replaces regularly shaped blocks with image-adaptive cells in the displacement calculation. The Newton-Raphson DIC method is modified to use the image pixels of the cells and to perform adaptive regularization to increase the spatial consistency of the displacements. Furthermore, a novel robust framework for strain calculation, also based on the Newton-Raphson algorithm, is introduced. The proposed methods are evaluated in five experimental scenarios, four of which use numerically deformed images while one uses real experimental data. Results indicate that, as the desired strain density increases, significant computational gains can be obtained while maintaining or improving accuracy and rigid-body rotation sensitivity.
Conical-scan tracking with the 64-m-diameter antenna at goldstone
NASA Technical Reports Server (NTRS)
Ohlson, J. E.; Reid, M. S.
1976-01-01
The theory and experimental work that demonstrated the feasibility of conical-scan tracking with a 64-m-diameter paraboloid antenna are documented. The purpose of this scheme is to actively track spacecraft and radio sources continuously with an accuracy superior to that obtained by manual correction of the computer-driven pointing. The conical-scan implementation gives increased tracking accuracy with X-band spacecraft signals, as demonstrated in the Mariner Venus/Mercury 1973 mission. Also, the high accuracy and ease of measurement with conical-scan tracking allow evaluation of systematic and random antenna tracking errors.
Risto, Malte; Martens, Marieke H
2014-07-01
Even with specific headway instructions, drivers are not able to attain the exact headways instructed. In this study, the effects of discrete headway feedback (and of the direction of headway adjustment) on headway accuracy for drivers carrying out time headway instructions were assessed experimentally. Two groups of 10 participants each (one receiving headway feedback, one control) carried out headway instructions in a driving simulator, increasing and decreasing their headway to a target headway of 2 s at speeds of 50, 80, and 100 km/h. The difference between the instructed and chosen headway was the measure of headway accuracy. The feedback group heard a sound signal at the moment they crossed the distance of the instructed headway. Unsupported participants showed no significant difference in headway accuracy when increasing or decreasing headways. Discrete headway feedback had varying effects on headway choice accuracy. When participants decreased their headway, feedback led to higher accuracy. When they increased their headway, feedback led to lower accuracy compared to no headway feedback. Support did not affect drivers' performance in maintaining the chosen headway. The present results suggest that (a) in its current form, discrete headway feedback is not sufficient to improve the overall accuracy of chosen headways when carrying out headway instructions; and (b) the effect of discrete headway feedback depends on the direction of headway adjustment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Network dynamics of social influence in the wisdom of crowds
Brackbill, Devon; Centola, Damon
2017-01-01
A longstanding problem in the social, biological, and computational sciences is to determine how groups of distributed individuals can form intelligent collective judgments. Since Galton’s discovery of the “wisdom of crowds” [Galton F (1907) Nature 75:450–451], theories of collective intelligence have suggested that the accuracy of group judgments requires individuals to be either independent, with uncorrelated beliefs, or diverse, with negatively correlated beliefs [Page S (2008) The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies]. Previous experimental studies have supported this view by arguing that social influence undermines the wisdom of crowds. These results showed that individuals’ estimates became more similar when subjects observed each other’s beliefs, thereby reducing diversity without a corresponding increase in group accuracy [Lorenz J, Rauhut H, Schweitzer F, Helbing D (2011) Proc Natl Acad Sci USA 108:9020–9025]. By contrast, we show general network conditions under which social influence improves the accuracy of group estimates, even as individual beliefs become more similar. We present theoretical predictions and experimental results showing that, in decentralized communication networks, group estimates become reliably more accurate as a result of information exchange. We further show that the dynamics of group accuracy change with network structure. In centralized networks, where the influence of central individuals dominates the collective estimation process, group estimates become more likely to increase in error. PMID:28607070
Network dynamics of social influence in the wisdom of crowds.
Becker, Joshua; Brackbill, Devon; Centola, Damon
2017-06-27
A longstanding problem in the social, biological, and computational sciences is to determine how groups of distributed individuals can form intelligent collective judgments. Since Galton's discovery of the "wisdom of crowds" [Galton F (1907) Nature 75:450-451], theories of collective intelligence have suggested that the accuracy of group judgments requires individuals to be either independent, with uncorrelated beliefs, or diverse, with negatively correlated beliefs [Page S (2008) The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies ]. Previous experimental studies have supported this view by arguing that social influence undermines the wisdom of crowds. These results showed that individuals' estimates became more similar when subjects observed each other's beliefs, thereby reducing diversity without a corresponding increase in group accuracy [Lorenz J, Rauhut H, Schweitzer F, Helbing D (2011) Proc Natl Acad Sci USA 108:9020-9025]. By contrast, we show general network conditions under which social influence improves the accuracy of group estimates, even as individual beliefs become more similar. We present theoretical predictions and experimental results showing that, in decentralized communication networks, group estimates become reliably more accurate as a result of information exchange. We further show that the dynamics of group accuracy change with network structure. In centralized networks, where the influence of central individuals dominates the collective estimation process, group estimates become more likely to increase in error.
An experimental apparatus for diffraction-limited soft x-ray nano-focusing
NASA Astrophysics Data System (ADS)
Merthe, Daniel J.; Goldberg, Kenneth A.; Yashchuk, Valeriy V.; Yuan, Sheng; McKinney, Wayne R.; Celestre, Richard; Mochi, Iacopo; Macdougall, James; Morrison, Gregory Y.; Rakawa, Senajith B.; Anderson, Erik; Smith, Brian V.; Domning, Edward E.; Warwick, Tony; Padmore, Howard
2011-09-01
Realizing the experimental potential of high-brightness, next generation synchrotron and free-electron laser light sources requires the development of reflecting x-ray optics capable of wavefront preservation and high-resolution nano-focusing. At the Advanced Light Source (ALS) beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for diffraction-limited Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of wavefront-sensing tests with increasing accuracy and sensitivity, including scanning-slit Hartmann tests, grating-based lateral shearing interferometry, and quantitative knife-edge testing. We describe the original experimental techniques and alignment methodology that have enabled us to optimally set a bendable KB mirror to achieve a focused, FWHM spot size of 150 nm, with 1 nm (1.24 keV) photons at 3.7 mrad numerical aperture. The predictions of wavefront measurement are confirmed by the knife-edge testing. The side-profiled elliptically bent mirror used in these one-dimensional focusing experiments was originally designed for a much different glancing angle and conjugate distances. Visible-light long-trace profilometry was used to pre-align the mirror before installation at the beamline. This work demonstrates that high-accuracy, at-wavelength wavefront-slope feedback can be used to optimize the pitch, roll, and mirror-bending forces in situ, using procedures that are deterministic and repeatable.
NASA Astrophysics Data System (ADS)
Li, Bang-Jian; Wang, Quan-Bao; Duan, Deng-Ping; Chen, Ji-An
2018-05-01
Intensity saturation can cause decorrelation and decrease measurement accuracy in digital image correlation (DIC). In this paper, a grey-intensity adjustment strategy is proposed to improve the measurement accuracy of DIC in the presence of intensity saturation. First, the grey-intensity adjustment strategy is described in detail; it recovers the truncated grey intensities of saturated pixels and reduces the decorrelation phenomenon. Simulated speckle patterns are then employed to demonstrate the efficacy of the proposed strategy, indicating that displacement accuracy can be improved by about 40%. Finally, a real experimental image is used to show the feasibility of the strategy, indicating that displacement accuracy can be increased by about 10%.
A new fault diagnosis algorithm for AUV cooperative localization system
NASA Astrophysics Data System (ADS)
Shi, Hongyang; Miao, Zhiyong; Zhang, Yi
2017-10-01
Cooperative localization of multiple AUVs is a new kind of underwater positioning technology that not only improves positioning accuracy but also has many advantages a single AUV does not have. It is necessary to detect and isolate faults to increase the reliability and availability of the AUV cooperative localization system. In this paper, the Extended Multiple Model Adaptive Cubature Kalman Filter (EMMACKF) method is presented to detect faults. Sensor failures are simulated based on off-line experimental data. Experimental results show that the faulty apparatus can be diagnosed effectively using the proposed method. Compared with the Multiple Model Adaptive Extended Kalman Filter and the Multi-Model Adaptive Unscented Kalman Filter, both accuracy and timeliness are improved to some extent.
Model Predictions and Observed Performance of JWST's Cryogenic Position Metrology System
NASA Technical Reports Server (NTRS)
Lunt, Sharon R.; Rhodes, David; DiAntonio, Andrew; Boland, John; Wells, Conrad; Gigliotti, Trevis; Johanning, Gary
2016-01-01
James Webb Space Telescope cryogenic testing requires measurement systems that both achieve a very high degree of accuracy and can function in that environment. Close-range photogrammetry was identified as meeting those criteria. Testing the capability of a close-range photogrammetric system prior to its existence is a challenging problem. Computer simulation was chosen over building a scaled mock-up to allow for increased flexibility in testing various configurations. Extensive validation work was done to ensure that the actual as-built system met accuracy and repeatability requirements. The simulated image data predicted the uncertainty in measurement to be within specification, and this prediction was borne out experimentally. Uncertainty at all levels was verified experimentally to be less than 0.1 millimeters.
A Parametric Rosetta Energy Function Analysis with LK Peptides on SAM Surfaces.
Lubin, Joseph H; Pacella, Michael S; Gray, Jeffrey J
2018-05-08
Although structures have been determined for many soluble proteins and an increasing number of membrane proteins, experimental structure determination methods are limited for complexes of proteins and solid surfaces. An economical alternative or complement to experimental structure determination is molecular simulation. Rosetta is one software suite that models protein-surface interactions, but Rosetta is normally benchmarked on soluble proteins. For surface interactions, the validity of the energy function is uncertain because it is a combination of independent parameters from energy functions developed separately for solution proteins and mineral surfaces. Here, we assess the performance of the RosettaSurface algorithm and test the accuracy of its energy function by modeling the adsorption of leucine/lysine (LK)-repeat peptides on methyl- and carboxy-terminated self-assembled monolayers (SAMs). We investigated how RosettaSurface predictions for this system compare with the experimental results, which showed that on both surfaces, LK-α peptides folded into helices and LK-β peptides held extended structures. Utilizing this model system, we performed a parametric analysis of Rosetta's Talaris energy function and determined that adjusting solvation parameters offered improved predictive accuracy. Simultaneously increasing lysine carbon hydrophilicity and the hydrophobicity of the surface methyl head groups yielded computational predictions most closely matching the experimental results. De novo models still should be interpreted skeptically unless bolstered in an integrative approach with experimental data.
Social power facilitates the effect of prosocial orientation on empathic accuracy.
Côté, Stéphane; Kraus, Michael W; Cheng, Bonnie Hayden; Oveis, Christopher; van der Löwe, Ilmo; Lian, Hua; Keltner, Dacher
2011-08-01
Power increases the tendency to behave in a goal-congruent fashion. Guided by this theoretical notion, we hypothesized that elevated power would strengthen the positive association between prosocial orientation and empathic accuracy. In 3 studies with university and adult samples, prosocial orientation was more strongly associated with empathic accuracy when distinct forms of power were high than when power was low. In Study 1, a physiological indicator of prosocial orientation, respiratory sinus arrhythmia, exhibited a stronger positive association with empathic accuracy in a face-to-face interaction among dispositionally high-power individuals. In Study 2, experimentally induced prosocial orientation increased the ability to accurately judge the emotions of a stranger but only for individuals induced to feel powerful. In Study 3, a trait measure of prosocial orientation was more strongly related to scores on a standard test of empathic accuracy among employees who occupied high-power positions within an organization. Study 3 further showed a mediated relationship between prosocial orientation and career satisfaction through empathic accuracy among employees in high-power positions but not among employees in lower power positions. Discussion concentrates upon the implications of these findings for studies of prosociality, power, and social behavior.
NASA Astrophysics Data System (ADS)
Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki
2018-01-01
A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using a flip-and-replication technique (FRT) is proposed. The FRT extends the length of the hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify a reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small image sensors.
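The abstract does not give the exact FRT construction; as an illustrative sketch, the discrete Wigner-Ville distribution of a 1-D signal can be computed as below, with a simple mirror-image extension standing in for one plausible reading of the flip-and-replication idea (the signal, its length and frequency are made up):

```python
# Illustrative discrete Wigner-Ville distribution plus a naive flip-and-replicate
# signal extension; this is a generic sketch, not the paper's hologram processing.
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution (rows: position/time, columns: frequency)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    kernel = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)
        for tau in range(-taumax, taumax + 1):
            kernel[t, tau % n] = x[t + tau] * np.conj(x[t - tau])
    return np.fft.fft(kernel, axis=1).real   # real because the kernel is conjugate-symmetric in tau

def flip_and_replicate(x):
    """Extend a signal by appending its mirror image (one reading of the FRT idea)."""
    return np.concatenate([x, x[::-1]])

sig = np.cos(2.0 * np.pi * 0.1 * np.arange(128))
wvd_plain = wigner_ville(sig)
wvd_extended = wigner_ville(flip_and_replicate(sig))   # longer input -> finer frequency grid
```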
Velocity precision measurements using laser Doppler anemometry
NASA Astrophysics Data System (ADS)
Dopheide, D.; Taux, G.; Narjes, L.
1985-07-01
A laser Doppler anemometer (LDA) was calibrated to determine its applicability to high-pressure measurements (up to 10 bars) for industrial purposes. The measurement procedure with the LDA and the computerized experimental layouts are presented. The calibration procedure is based on the absolute accuracy of the Doppler frequency and on calibration of the interference fringe spacing. A four-quadrant detector allows comparison of the fringe-spacing measurements with computed profiles. Further development of the LDA is recommended to increase accuracy (to 0.1% inaccuracy) and to apply the method industrially.
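For reference, the basic LDA relation underlying such calibrations is that velocity equals the Doppler frequency times the fringe spacing, with the fringe spacing set by the laser wavelength and the beam-intersection angle; a small worked example (all numbers hypothetical) is:

```python
# velocity = f_Doppler * d_fringe, with d_fringe = wavelength / (2 * sin(theta / 2)).
import numpy as np

wavelength = 632.8e-9            # He-Ne laser wavelength, m
half_angle = np.radians(5.0)     # half the beam-intersection angle (hypothetical)
fringe_spacing = wavelength / (2.0 * np.sin(half_angle))

f_doppler = 2.75e6               # measured Doppler frequency, Hz (hypothetical)
velocity = f_doppler * fringe_spacing
print(f"fringe spacing: {fringe_spacing * 1e6:.3f} um, velocity: {velocity:.3f} m/s")
```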
Buchner, Lena; Güntert, Peter
2015-02-03
Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often produce tight bundles of high precision but much lower accuracy, thereby overestimating accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints but with a realistic precision that is, across a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effect of recent popularity on heat-conduction based recommendation models
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Dong, Qiang; Shi, Yang-Bo; Fu, Yan; He, Jia-Lin
2017-05-01
Accuracy and diversity are two important measures for evaluating the performance of recommender systems. It has been demonstrated that the recommendation model inspired by the heat-conduction process has high diversity yet low accuracy. Many variants have been introduced to improve the accuracy while keeping high diversity, most of which regard the current node degree of an item as its popularity. However, in this way a few outdated items of large degree may be recommended to an enormous number of users. In this paper, we take recent popularity (recently increased item degrees) into account in the heat-conduction-based methods, and accordingly propose improved recommendation models. Experimental results on two benchmark data sets show that accuracy can be largely improved while keeping high diversity, compared with the original models.
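For orientation, a minimal heat-conduction-style scoring step on a binary user-item matrix looks as follows; the paper's modification would amount to replacing the current item degrees with recently increased degrees, which is only hinted at here through the `item_degree` argument (the matrix is toy data, not from the benchmark sets):

```python
# Minimal heat-conduction-style recommendation scoring on a binary user-item matrix.
import numpy as np

def heat_conduction_scores(A, user, item_degree=None):
    """A: users x items binary matrix; returns a score for every item for one user."""
    user_degree = A.sum(axis=1)                      # k_j for each user
    if item_degree is None:
        item_degree = A.sum(axis=0)                  # current popularity of each item
    # W[alpha, beta] = (1 / k_alpha) * sum_j A[j, alpha] * A[j, beta] / k_j
    W = np.diag(1.0 / item_degree) @ A.T @ np.diag(1.0 / user_degree) @ A
    return W @ A[user]                               # spread the user's "heat" to all items

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(heat_conduction_scores(A, user=0))
```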
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of the classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity. PMID:24883382
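The GMC model itself cannot be reproduced from the abstract; as a generic illustration of the underlying idea that combining classifiers can lift accuracy, a standard soft-voting ensemble in scikit-learn (on a public toy dataset, not the datasets of the paper) is:

```python
# Generic soft-voting ensemble compared against its individual members.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
members = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=members, voting="soft")
for name, clf in members + [("ensemble", ensemble)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```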
Distributed memory parallel Markov random fields using graph partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, C.; Perciano, T.; Ushizima, D.
Markov random field (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general-purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
Ecological risk assessors face increasing demands to assess more chemicals, with greater speed and accuracy, and to do so using fewer resources and experimental animals. New approaches in biological and computational sciences are being developed to generate mechanistic informatio...
Ecological risk assessors face increasing demands to assess more chemicals, with greater speed and accuracy, and to do so using fewer resources and experimental animals. New approaches in biological and computational sciences may be able to generate mechanistic information that ...
Yeates, Erin M; Molfenter, Sonja M; Steele, Catriona M
2008-01-01
Dysphagia, or difficulty swallowing, often occurs secondary to conditions such as stroke, head injury or progressive disease, many of which increase in frequency with advancing age. Sarcopenia, the gradual loss of muscle bulk and strength, can place older individuals at greater risk for dysphagia. Data are reported for three older participants in a pilot trial of a tongue-pressure training therapy. During the experimental therapy protocol, participants performed isometric strength exercises for the tongue as well as tongue pressure accuracy tasks. Biofeedback was provided using the Iowa Oral Performance Instrument (IOPI), an instrument that measures tongue pressure. Treatment outcome measures show increased isometric tongue strength, improved tongue pressure generation accuracy, improved bolus control on videofluoroscopy, and improved functional dietary intake by mouth. These preliminary results indicate that, for these three adults with dysphagia, tongue-pressure training was beneficial for improving both instrumental and functional aspects of swallowing. The experimental treatment protocol holds promise as a rehabilitative tool for various dysphagia populations.
Efficient and self-adaptive in-situ learning in multilayer memristor neural networks.
Li, Can; Belkin, Daniel; Li, Yunning; Yan, Peng; Hu, Miao; Ge, Ning; Jiang, Hao; Montgomery, Eric; Lin, Peng; Wang, Zhongrui; Song, Wenhao; Strachan, John Paul; Barnell, Mark; Wu, Qing; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei
2018-06-19
Memristors with tunable resistance states are emerging building blocks of artificial neural networks. However, in situ learning on a large-scale multiple-layer memristor network has yet to be demonstrated because of challenges in device property engineering and circuit integration. Here we monolithically integrate hafnium oxide-based memristors with a foundry-made transistor array into a multiple-layer neural network. We experimentally demonstrate in situ learning capability and achieve competitive classification accuracy on a standard machine learning dataset, which further confirms that the training algorithm allows the network to adapt to hardware imperfections. Our simulation using the experimental parameters suggests that a larger network would further increase the classification accuracy. The memristor neural network is a promising hardware platform for artificial intelligence with high speed-energy efficiency.
Research on effect of rough surface on FMCW laser radar range accuracy
NASA Astrophysics Data System (ADS)
Tao, Huirong
2018-03-01
Large-scale measurement systems for non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology have broad application prospects, since they make it easy to automate measurements without cooperative targets. However, the complexity and diversity of the surface characteristics of the measured object directly affect the measurement accuracy. First, a theoretical analysis of range accuracy for an FMCW laser radar was carried out, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at distances of 7 m to 24 m. The uncertainty of each target was obtained. The results show that measurement accuracy increases as surface reflectivity becomes larger. Good agreement was obtained between the theoretical analysis and measurements from rough surfaces. In addition, when the laser spot diameter is smaller than the surface correlation length, a multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
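The underlying FMCW relation that links the measured beat frequency to range is R = c·f_beat·T_sweep / (2·B); a back-of-the-envelope example with hypothetical sweep parameters is:

```python
# Basic FMCW range relation; all sweep parameters and the beat frequency are hypothetical.
c = 299_792_458.0        # speed of light, m/s
B = 100e9                # frequency sweep bandwidth, Hz (hypothetical)
T_sweep = 1e-3           # sweep duration, s (hypothetical)
f_beat = 8.0e6           # measured beat frequency, Hz (hypothetical)

R = c * f_beat * T_sweep / (2.0 * B)
print(f"range: {R:.4f} m")   # about 12 m for these numbers
```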
Design and Theoretical Analysis of a Resonant Sensor for Liquid Density Measurement
Zheng, Dezhi; Shi, Jiying; Fan, Shangchun
2012-01-01
In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with a simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector, and the closed-loop control unit comprises the preconditioning circuit, the digital signal processing and control unit, an analog-to-digital converter and a digital-to-analog converter. An approximate parameter model of the tuning fork is established, and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork is also analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing of the sensor has been carried out on standard calibration facilities at a constant 20 °C, and the sensor coefficients were calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m³. The results also confirm that the method for increasing the accuracy of liquid density measurement is feasible. PMID:22969378
Design and theoretical analysis of a resonant sensor for liquid density measurement.
Zheng, Dezhi; Shi, Jiying; Fan, Shangchun
2012-01-01
In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with a simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector, and the closed-loop control unit comprises the preconditioning circuit, the digital signal processing and control unit, an analog-to-digital converter and a digital-to-analog converter. An approximate parameter model of the tuning fork is established, and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork is also analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing of the sensor has been carried out on standard calibration facilities at a constant 20 °C, and the sensor coefficients were calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m³. The results also confirm that the method for increasing the accuracy of liquid density measurement is feasible.
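A common calibration model for vibrating-element density sensors (not necessarily the model used by the authors of the two records above) takes the squared oscillation period to vary linearly with fluid density, T² = A + B·ρ; two reference fluids then fix A and B, after which a measured frequency converts to a density estimate:

```python
# Two-point calibration of a T^2 = A + B * rho model; all numbers are hypothetical.
import numpy as np

rho_refs = np.array([1.2, 998.2])        # air and water at about 20 C, kg/m^3
f_refs = np.array([1200.0, 1100.0])      # measured resonance frequencies, Hz (hypothetical)
T2 = (1.0 / f_refs) ** 2
B, A = np.polyfit(rho_refs, T2, 1)       # least-squares fit of T^2 = A + B * rho

f_unknown = 1150.0                       # frequency measured in an unknown liquid (hypothetical)
rho_unknown = ((1.0 / f_unknown) ** 2 - A) / B
print(f"estimated density: {rho_unknown:.1f} kg/m^3")
```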
Gardner, Ian A; Whittington, Richard J; Caraguel, Charles G B; Hick, Paul; Moody, Nicholas J G; Corbeil, Serge; Garver, Kyle A.; Warg, Janet V.; Arzul, Isabelle; Purcell, Maureen; St. J. Crane, Mark; Waltzek, Thomas B.; Olesen, Niels J; Lagno, Alicia Gallardo
2016-01-01
Complete and transparent reporting of key elements of diagnostic accuracy studies for infectious diseases in cultured and wild aquatic animals benefits end-users of these tests, enabling the rational design of surveillance programs, the assessment of test results from clinical cases and comparisons of diagnostic test performance. Based on deficiencies in the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines identified in a prior finfish study (Gardner et al. 2014), we adapted the Standards for Reporting of Animal Diagnostic Accuracy Studies—paratuberculosis (STRADAS-paraTB) checklist of 25 reporting items to increase their relevance to finfish, amphibians, molluscs, and crustaceans and provided examples and explanations for each item. The checklist, known as STRADAS-aquatic, was developed and refined by an expert group of 14 transdisciplinary scientists with experience in test evaluation studies using field and experimental samples, in operation of reference laboratories for aquatic animal pathogens, and in development of international aquatic animal health policy. The main changes to the STRADAS-paraTB checklist were to nomenclature related to the species, the addition of guidelines for experimental challenge studies, and the designation of some items as relevant only to experimental studies and ante-mortem tests. We believe that adoption of these guidelines will improve reporting of primary studies of test accuracy for aquatic animal diseases and facilitate assessment of their fitness-for-purpose. Given the importance of diagnostic tests to underpin the Sanitary and Phytosanitary agreement of the World Trade Organization, the principles outlined in this paper should be applied to other World Organisation for Animal Health (OIE)-relevant species.
Gardner, Ian A; Whittington, Richard J; Caraguel, Charles G B; Hick, Paul; Moody, Nicholas J G; Corbeil, Serge; Garver, Kyle A; Warg, Janet V; Arzul, Isabelle; Purcell, Maureen K; Crane, Mark St J; Waltzek, Thomas B; Olesen, Niels J; Gallardo Lagno, Alicia
2016-02-25
Complete and transparent reporting of key elements of diagnostic accuracy studies for infectious diseases in cultured and wild aquatic animals benefits end-users of these tests, enabling the rational design of surveillance programs, the assessment of test results from clinical cases and comparisons of diagnostic test performance. Based on deficiencies in the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines identified in a prior finfish study (Gardner et al. 2014), we adapted the Standards for Reporting of Animal Diagnostic Accuracy Studies-paratuberculosis (STRADAS-paraTB) checklist of 25 reporting items to increase their relevance to finfish, amphibians, molluscs, and crustaceans and provided examples and explanations for each item. The checklist, known as STRADAS-aquatic, was developed and refined by an expert group of 14 transdisciplinary scientists with experience in test evaluation studies using field and experimental samples, in operation of reference laboratories for aquatic animal pathogens, and in development of international aquatic animal health policy. The main changes to the STRADAS-paraTB checklist were to nomenclature related to the species, the addition of guidelines for experimental challenge studies, and the designation of some items as relevant only to experimental studies and ante-mortem tests. We believe that adoption of these guidelines will improve reporting of primary studies of test accuracy for aquatic animal diseases and facilitate assessment of their fitness-for-purpose. Given the importance of diagnostic tests to underpin the Sanitary and Phytosanitary agreement of the World Trade Organization, the principles outlined in this paper should be applied to other World Organisation for Animal Health (OIE)-relevant species.
Prompt increase of ultrashort laser pulse transmission through thin silver films
NASA Astrophysics Data System (ADS)
Bezhanov, S. G.; Danilov, P. A.; Klekovkin, A. V.; Kudryashov, S. I.; Rudenko, A. A.; Uryupin, S. A.
2018-03-01
We study experimentally and numerically the increase in ultrashort laser pulse transmissivity through thin silver films caused by the heating of electrons. Transmission measurements of low- to moderate-energy femtosecond laser pulses through silver films 40-125 nm thick were carried out. We compare the experimental data with the transmitted fraction of energy obtained by solving the field equations together with the two-temperature model. The measured values were fitted with sufficient accuracy by varying the electron-electron collision frequency, whose exact value is usually poorly known. Since transmissivity undergoes more pronounced changes with increasing temperature than reflectivity does, we suggest this technique for studying the properties of nonequilibrium metals.
NASA Astrophysics Data System (ADS)
Hatzenbuhler, Chelsea; Kelly, John R.; Martinson, John; Okum, Sara; Pilgrim, Erik
2017-04-01
High-throughput DNA metabarcoding has gained recognition as a potentially powerful tool for biomonitoring, including early detection of aquatic invasive species (AIS). DNA-based techniques are advancing, but our understanding of the limits to detection for metabarcoding of complex samples is inadequate. For detecting AIS at an early stage of invasion, when the species is rare, accuracy at low detection limits is key. To evaluate the utility of metabarcoding in future fish community monitoring programs, we conducted several experiments to determine the sensitivity and accuracy of routine metabarcoding methods. Experimental mixes used larval fish tissue from multiple “common” species spiked with varying proportions of tissue from an additional “rare” species. Pyrosequencing of the genetic marker COI (cytochrome c oxidase subunit I) and subsequent sequence data analysis provided experimental evidence of low-level detection of the target “rare” species at biomass percentages as low as 0.02% of total sample biomass. Limits to detection varied interspecifically and were susceptible to amplification bias. Moreover, the results showed that some data-processing methods can skew sequence-based biodiversity measurements away from the corresponding relative biomass abundances and increase false absences. We suggest caution in interpreting presence/absence and relative abundance in larval fish assemblages until metabarcoding methods are optimized for accuracy and precision.
Design and experimental validation of novel 3D optical scanner with zoom lens unit
NASA Astrophysics Data System (ADS)
Huang, Jyun-Cheng; Liu, Chien-Sheng; Chiang, Pei-Ju; Hsu, Wei-Yan; Liu, Jian-Liang; Huang, Bai-Hao; Lin, Shao-Ru
2017-10-01
Optical scanners play a key role in many three-dimensional (3D) printing and CAD/CAM applications. However, existing optical scanners are generally designed to provide either a wide scanning area or a high 3D reconstruction accuracy from a lens with a fixed focal length. In the former case, the scanning area is increased at the expense of the reconstruction accuracy, while in the latter case, the reconstruction performance is improved at the expense of a more limited scanning range. In other words, existing optical scanners compromise between the scanning area and the reconstruction accuracy. Accordingly, the present study proposes a new scanning system including a zoom-lens unit, which combines both a wide scanning area and a high 3D reconstruction accuracy. In the proposed approach, the object is scanned initially under a suitable low-magnification setting for the object size (setting 1), resulting in a wide scanning area but a poor reconstruction resolution in complicated regions of the object. The complicated regions of the object are then rescanned under a high-magnification setting (setting 2) in order to improve the accuracy of the original reconstruction results. Finally, the models reconstructed after each scanning pass are combined to obtain the final reconstructed 3D shape of the object. The feasibility of the proposed method is demonstrated experimentally using a laboratory-built prototype. It is shown that the scanner has a high reconstruction accuracy over a large scanning area. In other words, the proposed optical scanner has significant potential for 3D engineering applications.
Experimental study of electrical discharge drilling of stainless steel UNS S30400
NASA Astrophysics Data System (ADS)
Hanash, E. A. H.; Ali, M. Y.
2018-01-01
In this study, overcut and taper angle were investigated in the machining of stainless steel UNS S30400 with respect to three electrical discharge machining parameters: electric current (Ip), pulse on-time (Ton) and pulse off-time (Toff). The electrode used was 1 mm in diameter with an aspect ratio of 10. Dimensional accuracy was measured by evaluating overcut and taper angle. These two measurements were performed using an optical microscope (Olympus BX41M, Japan). Experiment planning, evaluation, analysis and optimization were carried out using a response surface methodology (RSM) based approach in DOE software version 10.0.3, with a total of twenty experiments. The research reveals that discharge current has the most significant effect on overcut and taper angle, followed by pulse on-time and pulse off-time. As discharge current and pulse on-time increase, overcut and taper angle increase; however, when pulse off-time increases, overcut and taper angle decrease. The results of this study will be useful to the manufacturing industry in selecting appropriate parameters for the selected work material. The model shows good accuracy, with a percentage error of less than 5%.
SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.
Youn, Seongwook
2014-01-01
Email is one of the common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.
Accuracy of a hexapod parallel robot kinematics based external fixator.
Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer
2015-12-01
Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.
Experimental and casework validation of ambient temperature corrections in forensic entomology.
Johnson, Aidan P; Wallman, James F; Archer, Melanie S
2012-01-01
This paper expands on Archer (J Forensic Sci 49, 2004, 553), examining additional factors affecting ambient temperature correction of weather station data in forensic entomology. Sixteen hypothetical body discovery sites (BDSs) in Victoria and New South Wales (Australia), both in autumn and in summer, were compared to test whether the accuracy of correlation was affected by (i) length of correlation period; (ii) distance between BDS and weather station; and (iii) periodicity of ambient temperature measurements. The accuracy of correlations in data sets from real Victorian and NSW forensic entomology cases was also examined. Correlations increased weather data accuracy in all experiments, but significant differences in accuracy were found only between periodicity treatments. We found that a >5°C difference between average values of body in situ and correlation period weather station data was predictive of correlations that decreased the accuracy of ambient temperatures estimated using correlation. Practitioners should inspect their weather data sets for such differences. © 2011 American Academy of Forensic Sciences.
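The correction examined above is, in essence, a linear regression between paired temperatures recorded at the body discovery site and at the weather station during the correlation period, which is then applied to the station record for the period the body was in situ. The sketch below illustrates that idea; the temperature values are hypothetical and the practitioner checks described in the abstract (e.g. the >5°C average difference) are not implemented.

```python
import numpy as np

# Hypothetical paired temperatures (deg C) recorded during the correlation period:
station = np.array([14.2, 15.1, 17.3, 18.0, 16.4, 13.9])   # weather station
site    = np.array([15.0, 15.8, 18.1, 19.2, 17.0, 14.6])   # logger at body discovery site

# Least-squares linear correlation: site ~ a * station + b
a, b = np.polyfit(station, site, 1)

# Apply the regression to station data from the period the body was in situ
station_in_situ = np.array([12.5, 13.8, 16.0, 17.1])
corrected = a * station_in_situ + b
print(corrected)
```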
Tarrasch, Ricardo; Margalit-Shalom, Lilach; Berger, Rony
2017-01-01
The present study assessed the effects of the mindfulness/compassion cultivating program “Call to Care-Israel” on performance in visual perception (VP) and motor accuracy, as well as on anxiety levels and self-reported mindfulness, among 4th and 5th grade students. One hundred and thirty-eight children participated in the program for 24 weekly sessions, while 78 children served as controls. Repeated-measures ANOVAs yielded significant interactions between time of measurement and group for VP, motor accuracy, reported mindfulness, and anxiety. Post hoc tests revealed significant improvements in the four aforementioned measures in the experimental group only. In addition, significant correlations were obtained between the improvement in motor accuracy and both the reduction in anxiety and the increase in mindfulness. Since VP and motor accuracy are basic skills associated with quantifiable academic characteristics, such as reading and mathematical abilities, the results may suggest that mindfulness practice has the ability to improve academic achievements. PMID:28286492
Parent, Francois; Loranger, Sebastien; Mandal, Koushik Kanti; Iezzi, Victor Lambin; Lapointe, Jerome; Boisvert, Jean-Sébastien; Baiad, Mohamed Diaa; Kadoury, Samuel; Kashyap, Raman
2017-04-01
We demonstrate a novel approach to enhance the precision of surgical needle shape tracking based on distributed strain sensing using optical frequency domain reflectometry (OFDR). The precision enhancement is provided by using optical fibers with high scattering properties. Shape tracking of surgical tools using the strain sensing properties of optical fibers has seen increased attention in recent years. Most of the investigations made in this field use fiber Bragg gratings (FBG), which can be used as discrete or quasi-distributed strain sensors. By using a truly distributed sensing approach (OFDR), preliminary results show that the attainable accuracy is comparable to accuracies reported in the literature using FBG sensors for tracking applications (~1 mm). We propose a technique that enhances accuracy by 47% using UV-exposed fibers, which have higher light scattering compared to unexposed standard single-mode fibers. Improving the experimental setup will further enhance the accuracy provided by shape tracking using OFDR and will contribute significantly to clinical applications.
Geometrical accuracy improvement in flexible roll forming lines
NASA Astrophysics Data System (ADS)
Larrañaga, J.; Berner, S.; Galdos, L.; Groche, P.
2011-01-01
The general interest in producing profiles with variable cross section in a cost-effective way has increased in the last few years. The flexible roll forming process allows profiles with a cross section that varies lengthwise to be produced in a continuous way. Until now, only a few flexible roll forming lines have been developed and built. Apart from the flange wrinkling along the transition zone of u-profiles with variable cross section, the process limits have not been investigated and solutions for shape deviations are unknown. During the PROFOM project, a flexible roll forming machine has been developed with the objective of producing high-technology components for automotive body structures. In order to investigate the limits of the process, different profile geometries and steel grades, including high strength steels, have been applied. During the first experimental tests, several errors were identified as a result of the complex stress states generated during the forming process. In order to improve the accuracy of the target profiles and to meet the tolerance demands of the automotive industry, a thermo-mechanical solution has been proposed. Additional mechanical devices supporting the flexible roll forming process have been implemented in the roll forming line together with local heating techniques. The combination of both methods shows a significant increase in accuracy. In the present investigation, the experimental results of the validation process are presented.
Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches
NASA Astrophysics Data System (ADS)
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With the increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed, and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is paid to computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 obtained on an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.
Yeates, Erin M; Molfenter, Sonja M; Steele, Catriona M
2008-01-01
Dysphagia, or difficulty swallowing, often occurs secondary to conditions such as stroke, head injury or progressive disease, many of which increase in frequency with advancing age. Sarcopenia, the gradual loss of muscle bulk and strength, can place older individuals at greater risk for dysphagia. Data are reported for three older participants in a pilot trial of a tongue-pressure training therapy. During the experimental therapy protocol, participants performed isometric strength exercises for the tongue as well as tongue pressure accuracy tasks. Biofeedback was provided using the Iowa Oral Performance Instrument (IOPI), an instrument that measures tongue pressure. Treatment outcome measures show increased isometric tongue strength, improved tongue pressure generation accuracy, improved bolus control on videofluoroscopy, and improved functional dietary intake by mouth. These preliminary results indicate that, for these three adults with dysphagia, tongue-pressure training was beneficial for improving both instrumental and functional aspects of swallowing. The experimental treatment protocol holds promise as a rehabilitative tool for various dysphagia populations. PMID:19281066
Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy
NASA Astrophysics Data System (ADS)
Fu, Juan; Chen, Xiaoqian; Huang, Yiyong
2013-12-01
It is a difficult job to gauge the liquid fuel mass in a tank on a spacecraft under microgravity conditions. Without the presence of strong buoyancy, the configuration of the liquid and gas in the tank is uncertain and more than one bubble may exist in the liquid part. All of these factors affect the measurement accuracy of liquid mass gauging, especially for a method called the Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance, and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy and that the measurement accuracy increases as the compression frequency climbs at low fill levels, but low compression frequencies are the better choice for high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by an external disturbance at the liquid's natural frequency. The measurement accuracy is still acceptable under small-amplitude vibration.
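For orientation, the CMG principle can be reduced to a simple gas-law estimate: a small, known compression of the tank volume produces a pressure response that is inversely proportional to the ullage (gas) volume. The sketch below assumes adiabatic compression and uses illustrative tank, gas, and propellant values that are not taken from the paper.

```python
# Minimal sketch of the Compression Mass Gauge (CMG) estimate, assuming adiabatic
# compression of the ullage gas; all numerical values are illustrative assumptions.
GAMMA = 1.4          # ratio of specific heats of the pressurant gas (assumed)
V_TANK = 0.50        # tank volume, m^3 (assumed)
RHO_LIQ = 800.0      # liquid propellant density, kg/m^3 (assumed)

def liquid_mass(p0, delta_v, delta_p):
    """Estimate liquid mass from mean pressure p0 [Pa], compression volume
    amplitude delta_v [m^3], and measured pressure amplitude delta_p [Pa]."""
    v_gas = GAMMA * p0 * delta_v / delta_p      # ullage volume from the adiabatic relation
    v_liq = V_TANK - v_gas                      # remaining volume is liquid
    return RHO_LIQ * v_liq

print(liquid_mass(p0=2.0e5, delta_v=1.0e-5, delta_p=7.0e3))
```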
Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2010-07-01
Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate adapted-rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.
Farrell, Jordan S.; Palmer, Laura A.; Singleton, Anna C.; Pittman, Quentin J.; Teskey, G. Campbell
2016-01-01
Key points: The present study tested whether HCN channels contribute to the organization of motor cortex and to skilled motor behaviour during a forelimb reaching task. Experimental reductions in HCN channel signalling increase the representation of complex multiple forelimb movements in motor cortex as assessed by intracortical microstimulation. Global HCN1 KO mice exhibit reduced reaching accuracy and atypical movements during a single-pellet reaching task relative to wild-type controls. Acute pharmacological inhibition of HCN channels in forelimb motor cortex decreases reaching accuracy and increases atypical movements during forelimb reaching. Abstract: The mechanisms by which distinct movements of a forelimb are generated from the same area of motor cortex have remained elusive. Here we examined a role for HCN channels, given their ability to alter synaptic integration, in the expression of forelimb movement responses during intracortical microstimulation (ICMS) and movements of the forelimb on a skilled reaching task. We used short-duration high-resolution ICMS to evoke forelimb movements following pharmacological (ZD7288), experimental (electrically induced cortical seizures) or genetic approaches that we confirmed with whole-cell patch clamp to substantially reduce Ih current. We observed significant increases in the number of multiple movement responses evoked at single sites in motor maps to all three experimental manipulations in rats or mice. Global HCN1 knockout mice were less successful and exhibited atypical movements on a skilled motor-learning task relative to wild-type controls. Furthermore, in reaching-proficient rats, reaching accuracy was reduced and forelimb movements were altered during infusion of ZD7288 within motor cortex. Thus, HCN channels play a critical role in the separation of overlapping movement responses and allow for successful reaching behaviours. These data provide a novel mechanism for the encoding of multiple movement responses within shared networks of motor cortex. This mechanism supports a viewpoint of primary motor cortex as a site of dynamic integration for behavioural output. PMID:27568501
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which could simultaneously enhance the accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on MovieLens, Netflix, and Delicious datasets could be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm and also the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm could simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a creditable way for highly efficient information filtering.
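To make the algorithm concrete, the sketch below implements a two-step heat-conduction pass on a user-object bipartite network with a degree bias on the object side. Placing the bias as an exponent `theta` on the object degree is an assumption standing in for the paper's exact rule for lowering the "temperature" of small-degree objects; the toy adjacency matrix is hypothetical.

```python
import numpy as np

def biased_heat_conduction(A, user, theta=0.8):
    """Score unseen objects for `user` on a user-object bipartite adjacency A.

    Standard heat conduction averages the resource received from neighbouring
    users by the object degree; here the object degree is raised to `theta`
    (theta = 1 recovers plain heat conduction) as an assumed biasing rule.
    """
    k_user = A.sum(axis=1)            # user degrees
    k_obj = A.sum(axis=0)             # object degrees
    f0 = A[user].astype(float)        # initial resource: objects the user collected
    # step 1: objects -> users (average over each user's collected objects)
    h_user = A @ f0 / np.maximum(k_user, 1)
    # step 2: users -> objects, with degree-biased averaging on the object side
    scores = (A.T @ h_user) / np.maximum(k_obj, 1) ** theta
    scores[A[user] > 0] = -np.inf     # do not re-recommend collected objects
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(biased_heat_conduction(A, user=0))
```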
Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin
2018-04-01
Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing vegetation growth status and monitoring the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD), and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) the REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with the variation of dustfall amount; when the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2; when the dustfall amount is greater than 80 g/m2, the inversion accuracy decreases regularly and the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
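For reference, the two best-performing indices above have standard band-ratio definitions; the sketch below computes MTCI (with MERIS band centres) and the linear-interpolation REP from a reflectance spectrum. The band positions follow the common definitions in the literature and are not necessarily the exact bands used in this experiment, and the spectrum is synthetic.

```python
import numpy as np

def band(wl, refl, target_nm):
    """Reflectance at the band closest to target_nm."""
    return refl[np.argmin(np.abs(wl - target_nm))]

def mtci(wl, refl):
    # MERIS Terrestrial Chlorophyll Index (standard MERIS band centres)
    return (band(wl, refl, 753.75) - band(wl, refl, 708.75)) / \
           (band(wl, refl, 708.75) - band(wl, refl, 681.25))

def rep(wl, refl):
    # Red Edge Position by the common linear four-point interpolation
    r_edge = (band(wl, refl, 670) + band(wl, refl, 780)) / 2.0
    return 700 + 40 * (r_edge - band(wl, refl, 700)) / \
                      (band(wl, refl, 740) - band(wl, refl, 700))

wl = np.arange(400, 1001, 1.0)                       # hypothetical 1 nm spectrum
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 715) / 15))  # synthetic red-edge shape
print(mtci(wl, refl), rep(wl, refl))
```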
Quasi solution of radiation transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
There is uncertainty with experimental data as well as with input data of theoretical calculations. The neutron distribution from the variational principle, which takes into account both theoretical and experimental data, is obtained to increase the accuracy and speed of neutronic calculations. The neutron imbalance in mesh cells and the discrepancy between experimentally measured and calculated functional of the neutron distribution are simultaneously minimized. A fast-working and simple-programming iteration method is developed to minimize the objective functional. The method can be used in the core monitoring and control system for (a) power distribution calculations, (b) in- and ex-core detector calibration, (c) macro-cross sections or isotope distribution correction by experimental data, and (d) core and detector diagnostics.
Yan, Min; Takahashi, Hidekazu; Nishimura, Fumio
2004-12-01
The aim of the present study was to evaluate the dimensional accuracy and surface property of titanium casting obtained using a gypsum-bonded alumina investment. The experimental gypsum-bonded alumina investment with 20 mass% gypsum content mixed with 2 mass% potassium sulfate was used for five cp titanium castings and three Cu-Zn alloy castings. The accuracy, surface roughness (Ra), and reaction layer thickness of these castings were investigated. The accuracy of the castings obtained from the experimental investment ranged from -0.04 to 0.23%, while surface roughness (Ra) ranged from 7.6 to 10.3 μm. A reaction layer of about 150 μm thickness under the titanium casting surface was observed. These results suggested that the titanium casting obtained using the experimental investment was acceptable. Although the reaction layer was thin, surface roughness should be improved.
Processing of high-precision ceramic balls with a spiral V-groove plate
NASA Astrophysics Data System (ADS)
Feng, Ming; Wu, Yongbo; Yuan, Julong; Ping, Zhao
2017-03-01
As the demand for high-performance bearings gradually increases, ceramic balls with excellent properties, such as high accuracy, high reliability, and high chemical durability, are extensively used for high-performance bearings. In this study, a spiral V-groove plate method is employed in processing high-precision ceramic balls. After kinematic analysis of the ball-spin angle and enveloped lapping trajectories, an experimental rig is constructed and experiments are conducted to confirm the feasibility of this method. Kinematic analysis results indicate that the method not only allows control of the ball-spin angle but also uniformly distributes the enveloped lapping trajectories over the entire ball surface. Experimental results demonstrate that the novel spiral V-groove plate method performs better than the conventional concentric V-groove plate method in terms of roundness, surface roughness, diameter difference, and diameter decrease rate. Ceramic balls with a G3-level accuracy are achieved, and their typical roundness, minimum surface roughness, and diameter difference are 0.05, 0.0045, and 0.105 μm, respectively. These findings confirm that the proposed method can be applied to high-accuracy and high-consistency ceramic ball processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Cui, Mingjian; Hodge, Bri-Mathias
The large variability and uncertainty in wind power generation present a concern to power system operators, especially given the increasing amounts of wind power being integrated into the electric power system. Large ramps, one of the biggest concerns, can significantly influence system economics and reliability. The goal of the Wind Forecast Improvement Project (WFIP) was to improve the accuracy of forecasts and to evaluate the economic benefits of these improvements to grid operators. This paper evaluates the ramp forecasting accuracy gained by improving the performance of short-term wind power forecasting. This study focuses on the WFIP southern study region, which encompasses most of the Electric Reliability Council of Texas (ERCOT) territory, to compare the experimental WFIP forecasts with the existing short-term wind power forecasts (used at ERCOT) at multiple spatial and temporal scales. The study employs four significant wind power ramping definitions according to the power change magnitude, direction, and duration. The optimized swinging door algorithm is adopted to extract ramp events from actual and forecasted wind power time series. The results show that the experimental WFIP forecasts improve the accuracy of wind power ramp forecasting. This improvement can result in substantial cost savings and power system reliability enhancements.
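The ramp-extraction step can be illustrated with the basic swinging door idea: segment the power series into piecewise-linear pieces within a tolerance, then flag segments whose change and rate exceed ramp thresholds. The sketch below implements only this basic form; the "optimized" variant additionally merges adjacent segments, which is omitted here, and the tolerance and thresholds are arbitrary illustrative values.

```python
import numpy as np

def swinging_door_segments(t, p, eps):
    """Basic swinging-door segmentation: return (start, end) index pairs of
    piecewise-linear segments whose points stay within +/- eps of a line
    through the segment start point."""
    segments, start = [], 0
    s_up, s_low = np.inf, -np.inf          # tightest upper/lower admissible slopes
    for i in range(1, len(p)):
        dt = t[i] - t[start]
        s_up = min(s_up, (p[i] + eps - p[start]) / dt)
        s_low = max(s_low, (p[i] - eps - p[start]) / dt)
        if s_low > s_up:                   # the "doors" crossed: close segment at i-1
            segments.append((start, i - 1))
            start = i - 1
            dt = t[i] - t[start]
            s_up = (p[i] + eps - p[start]) / dt
            s_low = (p[i] - eps - p[start]) / dt
    segments.append((start, len(p) - 1))
    return segments

def extract_ramps(t, p, eps, min_change, min_rate):
    """Flag segments whose power change and ramp rate exceed the thresholds."""
    ramps = []
    for a, b in swinging_door_segments(t, p, eps):
        dp, dt = p[b] - p[a], t[b] - t[a]
        if dt > 0 and abs(dp) >= min_change and abs(dp / dt) >= min_rate:
            ramps.append((t[a], t[b], dp))
    return ramps

t = np.arange(0, 24, 0.25)                                                      # hours
p = 50 + 30 * np.sin(t / 3.0) + np.random.default_rng(2).normal(0, 1, t.size)   # MW
print(extract_ramps(t, p, eps=2.0, min_change=20.0, min_rate=4.0))
```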
Electrocutaneous stimulation system for Braille reading.
Echenique, Ana Maria; Graffigna, Juan Pablo; Mut, Vicente
2010-01-01
This work presents an assistive technology for people with visual disabilities and aims to facilitate access to written information in order to achieve better social inclusion and integration into work and educational activities. Two methods of electrical stimulation (by current and by voltage) of the mechanoreceptors were tested to obtain tactile sensations on the fingertip. Current and voltage stimulation were tested in a Braille cell and line prototype, respectively. These prototypes were evaluated in 33 blind and visually impaired subjects. The results of experimentation with both methods showed that electrical stimulation produces well-defined touch sensations on the fingertip. Better results in Braille character reading were obtained with current stimulation (85% accuracy); however, this form of stimulation causes uncomfortable sensations. The latter feeling was minimized with the voltage stimulation method, but with low efficiency (50% accuracy) in terms of identification of the characters. We conclude that electrical stimulation is a promising method for the development of a simple and inexpensive Braille reading system for blind people. We observed that voltage stimulation is preferred by the users; however, more experimental tests must be carried out in order to find the optimum values of the stimulus parameters and increase the accuracy of Braille character reading.
Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus
2018-06-01
Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgical process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. The accuracy of the drilling holes was precisely analyzed, and the influence of different levels of operator expertise and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase the drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step versus the multi-step technique. Single-step drilling protocols were shown to produce more accurate results than multi-step procedures. The outcome of either protocol can be further improved by the use of guiding templates, and operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio
2011-04-01
To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-fold increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduce the effective spatial resolution. A thinner volume is shown to provide a higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.
NASA Astrophysics Data System (ADS)
Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey
2018-03-01
The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulations. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review current models of electron-solid interaction and discuss their accuracy. We compare the simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization using angular-selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, the treatment of the beam skirt, and the explosive increase in computing time.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
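As a generic illustration of the "mathematical regularization" ingredient, the sketch below solves a Tikhonov-regularized linear inverse problem by stacking the forward operator with a first-difference penalty. The forward model, noise level, and regularization weight are arbitrary stand-ins, not the SIMPHONIES workflow used in the paper.

```python
import numpy as np

# Regularized inversion of model parameters against noisy data:
# minimize ||G m - d||^2 + lam^2 ||L m||^2 via a stacked least-squares system.
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 50))                       # linearized forward model (assumed)
m_true = np.sin(np.linspace(0, 3 * np.pi, 50))
d = G @ m_true + rng.normal(scale=0.5, size=200)     # noisy "experimental" data

L = np.eye(50) - np.eye(50, k=1)                     # first-difference smoothness operator
lam = 1.0                                            # regularization weight (assumed)

A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(50)])
m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))   # relative recovery error
```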
SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.
Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru
2014-01-01
Support vector machines (SVMs) have recently shown excellent performance in classification and prediction and are widely used in disease diagnosis and medical assistance. However, an SVM only functions well on two-group classification problems. This study applies feature selection with SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
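A minimal sketch of this pipeline using scikit-learn is shown below: features are ranked with SVM-RFE, and a small grid search over C and gamma then tunes an RBF SVM on the selected features. The grid search stands in for the Taguchi design, and the built-in iris data is used as a stand-in because the Dermatology and Zoo datasets are not bundled with scikit-learn.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)            # stand-in multiclass dataset

# Rank features with a linear SVM and keep the top 2
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
X_sel = X[:, rfe.support_]

# Tune C and gamma of an RBF SVM on the selected features (grid in place of Taguchi)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5).fit(X_sel, y)
print(grid.best_params_, grid.best_score_)
```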
Negative Correlations in Visual Cortical Networks
Chelaru, Mircea I.; Dragoi, Valentin
2016-01-01
The amount of information encoded by cortical circuits depends critically on the capacity of nearby neurons to exhibit trial-to-trial (noise) correlations in their responses. Depending on their sign and relationship to signal correlations, noise correlations can either increase or decrease the population code accuracy relative to uncorrelated neuronal firing. Whereas positive noise correlations have been extensively studied using experimental and theoretical tools, the functional role of negative correlations in cortical circuits has remained elusive. We addressed this issue by performing multiple-electrode recording in the superficial layers of the primary visual cortex (V1) of alert monkey. Despite the fact that positive noise correlations decayed exponentially with the difference in the orientation preference between cells, negative correlations were uniformly distributed across the population. Using a statistical model for Fisher Information estimation, we found that a mild increase in negative correlations causes a sharp increase in network accuracy even when mean correlations were held constant. To examine the variables controlling the strength of negative correlations, we implemented a recurrent spiking network model of V1. We found that increasing local inhibition and reducing excitation causes a decrease in the firing rates of neurons while increasing the negative noise correlations, which in turn increase the population signal-to-noise ratio and network accuracy. Altogether, these results contribute to our understanding of the neuronal mechanism involved in the generation of negative correlations and their beneficial impact on cortical circuit function. PMID:25217468
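The Fisher-information logic behind this result can be illustrated with the simplest possible case: for a pair of similarly tuned neurons with equal tuning-curve slopes and unit variances, the linear Fisher information is I_F = f'(s)^T C^{-1} f'(s) = 2 f'^2 / (1 + rho), so a negative noise correlation increases the encoded information. The slopes and correlation values below are toy assumptions; the paper's estimate used a statistical model fitted to the recorded V1 population.

```python
import numpy as np

def linear_fisher_information(df, C):
    """I_F = f'(s)^T C^{-1} f'(s) for tuning-curve derivatives df and noise covariance C."""
    return df @ np.linalg.solve(C, df)

df = np.array([1.0, 1.0])                 # equal tuning-curve derivatives at stimulus s (assumed)
for rho in (-0.3, 0.0, 0.3):              # pairwise noise correlation
    C = np.array([[1.0, rho],
                  [rho, 1.0]])            # unit variances
    print(rho, linear_fisher_information(df, C))   # information is largest for rho < 0
```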
Heterogeneity: The key to failure forecasting
Vasseur, Jérémie; Wadsworth, Fabian B.; Lavallée, Yan; Bell, Andrew F.; Main, Ian G.; Dingwell, Donald B.
2015-01-01
Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecast quantified significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power. PMID:26307196
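The FFM referred to above is commonly applied in its inverse-rate form: in the standard case where the precursor rate accelerates as dR/dt = k R^2, the inverse rate 1/R decreases linearly in time and its extrapolated zero-crossing estimates the failure time. The sketch below applies that linear extrapolation to a synthetic noisy rate series; real applications must also handle the acceleration exponent and the choice of fitting window, which are ignored here.

```python
import numpy as np

def ffm_failure_time(t, rate):
    """Inverse-rate FFM: fit a line to 1/rate vs time and return its zero-crossing."""
    inv = 1.0 / rate
    slope, intercept = np.polyfit(t, inv, 1)
    return -intercept / slope

t = np.linspace(0.0, 9.0, 40)
t_f = 10.0
rate = 1.0 / (t_f - t)                                         # synthetic accelerating rate
rate *= np.random.default_rng(1).normal(1.0, 0.05, t.size)     # multiplicative noise
print(ffm_failure_time(t, rate))                               # close to the true t_f = 10
```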
Simulating Memory Impairment for Child Sexual Abuse.
Newton, Jeremy W; Hobbs, Sue D
2015-08-01
The current study investigated effects of simulated memory impairment on recall of child sexual abuse (CSA) information. A total of 144 adults were tested for memory of a written CSA scenario in which they role-played as the victim. There were four experimental groups and two testing sessions. During Session 1, participants read a CSA story and recalled it truthfully (Genuine group), omitted CSA information (Omission group), exaggerated CSA information (Commission group), or did not recall the story at all (No Rehearsal group). One week later, at Session 2, all participants were told to recount the scenario truthfully, and their memory was then tested using free recall and cued recall questions. The Session 1 manipulation affected memory accuracy during Session 2. Specifically, compared with the Genuine group's performance, the Omission, Commission, or No Rehearsal groups' performance was characterized by increased omission and commission errors and decreased reporting of correct details. Victim blame ratings (i.e., victim responsibility and provocativeness) and participant gender predicted increased error and decreased accuracy, whereas perpetrator blame ratings predicted decreased error and increased accuracy. Findings are discussed in relation to factors that may affect memory for CSA information. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Soltani, Omid; Akbari, Mohammad
2016-10-01
In this paper, the effects of temperature and particle concentration on the dynamic viscosity of a MgO-MWCNT/ethylene glycol hybrid nanofluid are examined. The experiments were carried out over the solid volume fraction range of 0 to 1.0% and temperatures ranging from 30 °C to 60 °C. The results showed that the hybrid nanofluid behaves as a Newtonian fluid for all solid volume fractions and temperatures considered. The measurements also indicated that the dynamic viscosity increases with increasing solid volume fraction and decreases with rising temperature. The relative viscosity revealed that when the solid volume fraction increases from 0.1 to 1%, the dynamic viscosity increases by up to 168%. Finally, using the experimental data, a new correlation has been suggested to predict the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluids. Comparisons between the correlation outputs and experimental results showed that the suggested correlation has acceptable accuracy.
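The abstract does not give the form of the proposed correlation, but the general procedure of fitting a viscosity correlation to measurements can be sketched with scipy's curve_fit; the polynomial-plus-temperature functional form and the data points below are assumptions for illustration only, not the paper's correlation.

```python
import numpy as np
from scipy.optimize import curve_fit

phi = np.array([0.1, 0.25, 0.5, 0.75, 1.0] * 2) / 100           # solid volume fraction
T = np.array([30] * 5 + [60] * 5, dtype=float)                  # temperature, deg C
mu_rel = np.array([1.10, 1.25, 1.55, 1.95, 2.45,
                   1.08, 1.22, 1.50, 1.88, 2.35])               # hypothetical relative viscosity

def model(X, a, b, c):
    phi, T = X
    return 1.0 + a * phi + b * phi**2 + c * phi * T              # assumed functional form

params, _ = curve_fit(model, (phi, T), mu_rel)
print(params)
print(model((np.array([0.007]), np.array([45.0])), *params))     # example prediction
```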
1980-12-01
…and away from written testing. Accordingly, a large portion of US Army testing employs the hands-on approach to testing. The Hands-On Test (HOT) is … increase the amount of job-related information utilized by raters in appraisals. All three approaches are aimed at increasing accuracy by reducing …
Improving CNN Performance Accuracies With Min-Max Objective.
Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning
2017-06-09
We propose a novel method for improving the performance accuracies of a convolutional neural network (CNN) without the need to increase the network complexity. We accomplish this goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, with both image classification and face verification tasks, reveal that employing the proposed Min-Max objective in the training process can remarkably improve the performance accuracies of a CNN model in comparison with the same model trained without this objective.
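As a simplified stand-in for the objective described above, the sketch below computes, on a batch of feature vectors, a penalty that shrinks within-class pairwise distances and pushes the mean between-class distance above a margin. The paper's actual formulation is manifold-based and differs in detail; the margin, features, and labels here are hypothetical.

```python
import numpy as np

def min_max_penalty(features, labels, margin=10.0):
    """Simplified min-max style penalty on a feature layer: shrink within-class
    pairwise squared distances and push the mean between-class squared distance
    above a margin (a stand-in for the paper's manifold-based objective)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)               # ignore self-distances
    diff = labels[:, None] != labels[None, :]
    within = d2[same].mean() if same.any() else 0.0
    between = d2[diff].mean() if diff.any() else 0.0
    return within + max(0.0, margin - between)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))                # hypothetical flattened feature maps
labs = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(min_max_penalty(feats, labs))
```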
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention owing to their precision, speed, and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by ageing equipment and changing processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC), which updates the model with small samples so that it adapts to accuracy deterioration. Firstly, abundant features were introduced to cover a large amount of information about the defects. Secondly, we constructed a series of SVMs with random subspaces of the features. Then, a Bayes classifier was trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we proposed a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the experimental results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
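A minimal sketch of this kind of scheme, assuming scikit-learn and synthetic data: base SVMs are trained on random feature subspaces, a Gaussian naive Bayes layer fuses their outputs, and only that fusion layer is updated with a small batch of newly labelled samples. This is a simplified stand-in for BYEC, not the paper's exact formulation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                          # hypothetical defect features
y = (X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=300) > 0).astype(int)

# Base SVMs, each trained on a random subspace of the features
subspaces = [rng.choice(40, size=12, replace=False) for _ in range(10)]
svms = [SVC().fit(X[:, s], y) for s in subspaces]

def base_outputs(X):
    """Stack the base SVM predictions as input features for the fusion layer."""
    return np.column_stack([m.predict(X[:, s]) for m, s in zip(svms, subspaces)])

fusion = GaussianNB()
fusion.partial_fit(base_outputs(X), y, classes=[0, 1])  # Bayes fusion kernel

# Later: adapt only the fusion layer with a small batch of newly labelled samples
X_new = rng.normal(size=(10, 40))
y_new = (X_new[:, :5].sum(axis=1) > 0).astype(int)
fusion.partial_fit(base_outputs(X_new), y_new)
print(fusion.predict(base_outputs(X_new[:3])))
```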
Stress enhanced calcium kinetics in a neuron.
Kant, Aayush; Bhandakkar, Tanmay K; Medhekar, Nikhil V
2018-02-01
Accurate modeling of the mechanobiological response of a traumatic brain injury is beneficial for its effective clinical examination, treatment and prevention. Here, we present a stress history-dependent non-spatial kinetic model to predict the microscale phenomena of secondary insults due to accumulation of excess calcium ions (Ca2+) induced by the macroscale primary injuries. The model is able to capture the experimentally observed increase and subsequent partial recovery of intracellular Ca2+ concentration in response to various types of mechanical impulses. We further establish the accuracy of the model by comparing our predictions with key experimental observations.
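To give a flavour of a non-spatial kinetic model of this kind, the sketch below integrates a single calcium balance with a stress-gated influx term and a first-order clearance term, producing a rise and partial recovery after a mechanical impulse. The functional forms and all constants are illustrative assumptions, not the model of the paper.

```python
import numpy as np

def simulate(t_end=60.0, dt=1e-3, c_rest=1e-4, k_clear=0.2,
             k_influx=5e-3, t_impulse=5.0, tau=2.0):
    """Toy Ca2+ kinetics: influx gated by a decaying stress impulse, linear clearance."""
    t = np.arange(0.0, t_end, dt)
    c = np.empty_like(t)
    c[0] = c_rest
    for i in range(1, t.size):
        gate = np.exp(-(t[i] - t_impulse) / tau) if t[i] >= t_impulse else 0.0
        dc = k_influx * gate - k_clear * (c[i - 1] - c_rest)   # influx minus clearance
        c[i] = c[i - 1] + dt * dc                              # explicit Euler step
    return t, c

t, c = simulate()
print(c.max(), c[-1])   # peak rise and subsequent recovery toward the resting level
```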
Discharge reliability in ablative pulsed plasma thrusters
NASA Astrophysics Data System (ADS)
Wu, Zhiwen; Sun, Guorui; Yuan, Shiyue; Huang, Tiankun; Liu, Xiangyang; Xie, Kan; Wang, Ningfei
2017-08-01
Discharge reliability is typically neglected in low-ignition-cycle ablative pulsed plasma thrusters (APPTs). In this study, the discharge reliability of an APPT is assessed analytically and experimentally. The goals of this study are to better understand the ignition characteristics and to assess the accuracy of the analytical method. For each of six sets of operating conditions, 500 tests of a parallel-plate APPT with a coaxial semiconductor spark plug are conducted. The discharge voltage and current are measured with a high-voltage probe and a Rogowski coil, respectively, to determine whether the discharge is successful. Generally, the discharge success rate increases as the discharge voltage increases, and it decreases as the electrode gap and the number of ignitions increase. The theoretical analysis and the experimental results are reasonably consistent. This approach provides a reference for designing APPTs and improving their stability.
Waide, Emily H; Tuggle, Christopher K; Serão, Nick V L; Schroyen, Martine; Hess, Andrew; Rowland, Raymond R R; Lunney, Joan K; Plastow, Graham; Dekkers, Jack C M
2018-02-01
Genomic prediction of the pig's response to the porcine reproductive and respiratory syndrome (PRRS) virus (PRRSV) would be a useful tool in the swine industry. This study investigated the accuracy of genomic prediction based on porcine SNP60 Beadchip data using training and validation datasets from populations with different genetic backgrounds that were challenged with different PRRSV isolates. Genomic prediction accuracy averaged 0.34 for viral load (VL) and 0.23 for weight gain (WG) following experimental PRRSV challenge, which demonstrates that genomic selection could be used to improve response to PRRSV infection. Training on WG data during infection with a less virulent PRRSV, KS06, resulted in poor accuracy of prediction for WG during infection with a more virulent PRRSV, NVSL. Inclusion of single nucleotide polymorphisms (SNPs) that are in linkage disequilibrium with a major quantitative trait locus (QTL) on chromosome 4 was vital for accurate prediction of VL. Overall, SNPs that were significantly associated with either trait in single SNP genome-wide association analysis were unable to predict the phenotypes with an accuracy as high as that obtained by using all genotyped SNPs across the genome. Inclusion of data from close relatives into the training population increased whole genome prediction accuracy by 33% for VL and by 37% for WG but did not affect the accuracy of prediction when using only SNPs in the major QTL region. Results show that genomic prediction of response to PRRSV infection is moderately accurate and, when using all SNPs on the porcine SNP60 Beadchip, is not very sensitive to differences in virulence of the PRRSV in training and validation populations. Including close relatives in the training population increased prediction accuracy when using the whole genome or SNPs other than those near a major QTL.
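For readers unfamiliar with the machinery, whole-genome prediction of this kind is often implemented as ridge regression on SNP genotypes (SNP-BLUP, equivalent to GBLUP), with accuracy reported as the correlation between predicted and observed phenotypes in a validation set. The sketch below uses simulated genotypes and phenotypes and an arbitrary shrinkage parameter; it is illustrative only and is not the Bayesian or GBLUP software actually used in such studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_valid, n_snp = 400, 100, 2000
X = rng.binomial(2, 0.3, size=(n_train + n_valid, n_snp)).astype(float)
X -= X.mean(axis=0)                                    # centre the 0/1/2 genotype codes
beta = np.zeros(n_snp)
qtl = rng.choice(n_snp, 50, replace=False)
beta[qtl] = rng.normal(0, 0.3, 50)                     # 50 causal markers (simulated)
y = X @ beta + rng.normal(0, 1.0, n_train + n_valid)   # e.g. a viral-load-like phenotype

lam = 50.0                                             # shrinkage parameter (assumed)
Xt = X[:n_train]
b_hat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snp), Xt.T @ y[:n_train])

y_pred = X[n_train:] @ b_hat                           # predict the validation animals
accuracy = np.corrcoef(y_pred, y[n_train:])[0, 1]      # prediction accuracy as a correlation
print(round(accuracy, 2))
```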
Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
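For orientation, the sketch below illustrates the general recipe in one dimension: fit a spline to a sampled point spread function to obtain a continuous image model, then evaluate the Poisson-noise Fisher information for the location parameter, whose inverse square root is the Cramér-Rao bound on localization accuracy. The Gaussian profile, pixel size, and photon count are assumptions; the paper works with experimentally collected 2-D image sets.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

pixel = 10.0                                    # nm, assumed pixel size
grid = np.arange(-500.0, 500.0 + pixel, pixel)  # pixel centres, nm
sigma_psf = 100.0                               # nm, assumed PSF width

# Sampled 1-D PSF; in practice this would come from experimentally collected
# images of a point source rather than an analytical form.
psf_samples = np.exp(-grid**2 / (2.0 * sigma_psf**2))
psf_samples /= psf_samples.sum() * pixel        # normalise to unit area

# Continuous model of the object image via a spline fit (the paper's central idea).
spline = UnivariateSpline(grid, psf_samples, k=3, s=0)
dspline = spline.derivative()

n_photons = 1000.0                              # assumed detected photon count

# Poisson-noise Fisher information for the location parameter x0:
# I(x0) = sum_k (d lambda_k/dx0)^2 / lambda_k, with lambda_k = N * f(x_k - x0) * pixel
lam = n_photons * spline(grid) * pixel
dlam = -n_photons * dspline(grid) * pixel
fisher = np.sum(dlam**2 / lam)

print(f"CRLB on localization accuracy ~ {1.0 / np.sqrt(fisher):.1f} nm")
# For a Gaussian PSF this approaches sigma_psf / sqrt(N) ~ 3.2 nm.
```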
Design of all-weather celestial navigation system
NASA Astrophysics Data System (ADS)
Sun, Hongchi; Mu, Rongjun; Du, Huajun; Wu, Peng
2018-03-01
In order to realize autonomous navigation in the atmosphere, an all-weather celestial navigation system is designed. The research covers a comentropy (information entropy) discrimination method and an adaptive navigation algorithm based on the P value. The comentropy discrimination method is studied to realize independent switching between the two celestial navigation modes, starlight and radio. Finally, an adaptive filtering algorithm based on the P value is proposed, which greatly improves the disturbance rejection capability of the system. The experimental results show that the three-axis attitude accuracy is better than 10″ and that the system can operate in all weather conditions. In a perturbed environment, the position accuracy of the integrated navigation system is increased by 20% compared with the traditional method. The system basically meets the requirements of an all-weather celestial navigation system, offering stability, reliability, high accuracy and strong anti-interference capability.
Balog, Julia; Perenyi, Dora; Guallar-Hoyas, Cristina; Egri, Attila; Pringle, Steven D; Stead, Sara; Chevallier, Olivier P; Elliott, Chris T; Takats, Zoltan
2016-06-15
Increasingly abundant food fraud cases have brought food authenticity and safety into major focus. This study presents a fast and effective way to identify meat products using rapid evaporative ionization mass spectrometry (REIMS). The experimental setup was demonstrated to be able to record a mass spectrometric profile of meat specimens in a time frame of <5 s. A multivariate statistical algorithm was developed and successfully tested for the identification of animal tissue with different anatomical origin, breed, and species with 100% accuracy at species and 97% accuracy at breed level. Detection of the presence of meat originating from a different species (horse, cattle, and venison) has also been demonstrated with high accuracy using mixed patties with a 5% detection limit. REIMS technology was found to be a promising tool in food safety applications providing a reliable and simple method for the rapid characterization of food products.
NASA Astrophysics Data System (ADS)
Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.
2015-09-01
This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.
NASA Astrophysics Data System (ADS)
Hemmat Esfe, Mohammad; Tatar, Afshin; Ahangar, Mohammad Reza Hassani; Rostamian, Hossein
2018-02-01
Since conventional thermal fluids such as water, oil, and ethylene glycol have poor thermal properties, tiny solid particles are added to these fluids to improve their heat transfer. As viscosity determines the rheological behavior of a fluid, studying the parameters affecting viscosity is crucial. Since the experimental measurement of viscosity is expensive and time consuming, predicting this parameter is an apt alternative. In this work, three artificial intelligence methods, Genetic Algorithm-Radial Basis Function Neural Networks (GA-RBF), Least Square Support Vector Machine (LS-SVM) and Gene Expression Programming (GEP), were applied to predict the viscosity of TiO2/SAE 50 nano-lubricant with non-Newtonian power-law behavior using experimental data. The correlation factor (R2), Average Absolute Relative Deviation (AARD), Root Mean Square Error (RMSE), and Margin of Deviation were employed to investigate the accuracy of the proposed models. RMSE values of 0.58, 1.28, and 6.59 and R2 values of 0.99998, 0.99991, and 0.99777 reveal the accuracy of the proposed models for the respective GA-RBF, CSA-LSSVM, and GEP methods. Among the developed models, GA-RBF shows the best accuracy.
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
A new art code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. Accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
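For readers unfamiliar with algebraic reconstruction, a generic Kaczmarz-style ART loop for a linear projection system W f = p looks like the sketch below; it is an illustration of the family of methods, not the least-squares iterative-refinement variant developed in the report.

```python
import numpy as np

def art_reconstruct(W, p, n_sweeps=50, relax=0.5):
    """Kaczmarz-style ART: update f row by row of W so that W f approaches p."""
    f = np.zeros(W.shape[1])
    row_norms = (W**2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = p[i] - W[i] @ f
            f += relax * residual / row_norms[i] * W[i]
    return f

# Tiny synthetic example: random projection geometry and a known field.
rng = np.random.default_rng(1)
W = rng.random((40, 25))
f_true = rng.random(25)
p = W @ f_true + rng.normal(0.0, 0.01, 40)   # slightly noisy data

f_rec = art_reconstruct(W, p)
print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```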
Ontology Matching with Semantic Verification.
Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R
2009-09-01
ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.
Navigator Accuracy Requirements for Prospective Motion Correction
Maclaren, Julian; Speck, Oliver; Stucht, Daniel; Schulze, Peter; Hennig, Jürgen; Zaitsev, Maxim
2010-01-01
Prospective motion correction in MR imaging is becoming increasingly popular to prevent the image artefacts that result from subject motion. Navigator information is used to update the position of the imaging volume before every spin excitation so that lines of acquired k-space data are consistent. Errors in the navigator information, however, result in residual errors in each k-space line. This paper presents an analysis linking noise in the tracking system to the power of the resulting image artefacts. An expression is formulated for the required navigator accuracy based on the properties of the imaged object and the desired resolution. Analytical results are compared with computer simulations and experimental data. PMID:19918892
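A hedged illustration of the underlying mechanism: for a rigid translation, an uncorrected position error at the time a k-space line is acquired multiplies that line by a phase factor exp(-i 2π k Δx), so navigator noise spreads power into artefacts. The 1-D phantom, resolution, and noise levels below are assumptions, and the paper's analysis is more general.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 256                         # 1-D "image" size, assumed
fov = 256.0                     # field of view in mm, so pixel size = 1 mm
x = np.arange(N)
obj = ((x > 96) & (x < 160)).astype(float)    # simple box phantom

k = np.fft.fftfreq(N, d=fov / N)              # spatial frequency, cycles/mm
kspace = np.fft.fft(obj)

for sigma_mm in (0.1, 0.5, 1.0):              # residual navigator error (std)
    # Each k-space line sees an independent uncorrected shift error.
    shift_err = rng.normal(0.0, sigma_mm, N)
    corrupted = kspace * np.exp(-2j * np.pi * k * shift_err)
    img = np.fft.ifft(corrupted)
    artefact_power = np.mean(np.abs(img - obj) ** 2)
    print(f"sigma = {sigma_mm:4.1f} mm -> relative artefact power "
          f"{artefact_power / np.mean(obj**2):.4f}")
```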
ERIC Educational Resources Information Center
Wright, Richard A.
2013-01-01
The purpose of this research was to investigate the effects of virtual reality training on the development of cognitive memory and handgun accuracy by law enforcement neophytes. One hundred and six academy students from 6 different academy classes were divided into two groups, experimental and control. The experimental group was exposed to virtual…
ERIC Educational Resources Information Center
Gutierrez de Blume, Antonio P.
2017-01-01
This study investigated the influence of strategy training instruction and an extrinsic incentive on American fourth- and fifth-grade students' (N = 35) performance, confidence in performance, and calibration accuracy. Using an experimental design, children were randomized to either an experimental group (strategy training and an extrinsic…
Whalen, D. H.; Zunshine, Lisa; Holquist, Michael
2015-01-01
Reading fiction is a major component of intellectual life, yet it has proven difficult to study experimentally. One aspect of literature that has recently come to light is perspective embedding (“she thought I left” embedding her perspective on “I left”), which seems to be a defining feature of fiction. Previous work (Whalen et al., 2012) has shown that increasing levels of embedment affect the time that it takes readers to read and understand short vignettes in a moving window paradigm. With increasing levels of embedment from 1 to 5, reading times in a moving window paradigm rose almost linearly. However, level 0 was as slow as 3–4. Accuracy on probe questions was relatively constant until dropping at the fifth level. Here, we assessed this effect in a more ecologically valid (“typical”) reading paradigm, in which the entire vignette was visible at once, either for as long as desired (Experiment 1) or for a fixed time (Experiment 2). In Experiment 1, reading times followed a pattern similar to that of the previous experiment, with some differences in absolute speed. Accuracy matched previous results: fairly consistent accuracy until a decline at level 5, indicating that both presentation methods allowed understanding. In Experiment 2, accuracy was somewhat reduced, perhaps because participants were less successful at allocating their attention than they were during the earlier experiment; however, the pattern was the same. It seems that literature does not, on average, use the easiest reading level but rather uses a middle ground that challenges the reader, but not too much. PMID:26635684
Internal and External Imagery Effects on Tennis Skills Among Novices.
Dana, Amir; Gozalzadeh, Elmira
2017-10-01
The purpose of this study was to determine the effects of internal and external visual imagery perspectives on performance accuracy of open and closed tennis skills (i.e., serve, forehand, and backhand) among novices. Thirty-six young male novices, aged 15-18 years, from a summer tennis program participated. Following initial skill acquisition (12 sessions), baseline assessments of imagery ability and imagery perspective preference were used to assign participants to one of three groups: internal imagery ( n = 12), external imagery ( n = 12), or a no-imagery (mental math exercise) control group ( n = 12). The experimental interventions of 15 minutes of mental imagery (internal or external) or mental math exercises followed by 15 minutes of physical practice were held three times a week for six weeks. The performance accuracy of the groups on the serve, forehand, and backhand strokes was measured at pre- and post-test using videotaping. Results showed significant increases in the performance accuracy of all three tennis strokes in all three groups, but serve accuracy in the internal imagery group and forehand accuracy in the external imagery group showed greater improvements, while backhand accuracy was similarly improved in all three groups. These findings highlight differential efficacy of internal and external visual imagery for performance improvement on complex sport skills in early stage motor learning.
NASA Astrophysics Data System (ADS)
Rozhaeva, K.
2018-01-01
The aim of the research is to improve the quality of the design process at the research stage of developing an active on-board descent system for spent launch-vehicle stages with liquid-propellant rocket engines, by simulating the gasification of unused fuel residues in the tanks. A design technique for the gasification of liquid rocket propellant residues in the tank is proposed in which errors in the calculation algorithm are found and fixed to increase the accuracy of the calculation results. Experimental modelling of the evaporation of a model liquid in a limited reservoir on the experimental stand allows, by rejecting false measurements according to given criteria and detecting faults, the reliability of the experimental results to be enhanced and the cost of the experiments to be reduced.
Comparisons Between Experimental and Semi-theoretical Cutting Forces of CCS Disc Cutters
NASA Astrophysics Data System (ADS)
Xia, Yimin; Guo, Ben; Tan, Qing; Zhang, Xuhui; Lan, Hao; Ji, Zhiyong
2018-05-01
This paper focuses on comparisons between the experimental and semi-theoretical forces of CCS disc cutters acting on different rocks. The experimental forces obtained from LCM tests were used to evaluate the prediction accuracy of a semi-theoretical CSM model. The results show that the CSM model reliably predicts the normal forces acting on red sandstone and granite, but underestimates the normal forces acting on marble. Some additional LCM test data from the literature were collected to further explore the ability of the CSM model to predict the normal forces acting on rocks of different strengths. The CSM model underestimates the normal forces acting on soft rocks, semi-hard rocks and hard rocks by approximately 38, 38 and 10%, respectively, but very accurately predicts those acting on very hard and extremely hard rocks. A calibration factor is introduced to modify the normal forces estimated by the CSM model. The overall trend of the calibration factor is characterized by an exponential decrease with increasing rock uniaxial compressive strength. The mean fitting ratios between the normal forces estimated by the modified CSM model and the experimental normal forces acting on soft rocks, semi-hard rocks and hard rocks are 1.076, 0.879 and 1.013, respectively. The results indicate that the prediction accuracy and the reliability of the CSM model have been improved.
Madi, Mahmoud K; Karameh, Fadi N
2018-05-11
Many physical models of biological processes including neural systems are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time consuming or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of collected data over single or few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yields fast convergence and increased accuracy of the model. We herein introduce an adaptive framework in which optimal input design is integrated with Square root Cubature Kalman Filters (OID-SCKF) to develop an online estimation procedure that first, converges significantly quicker, thereby permitting model fitting over shorter time windows, and second, enhances model accuracy when only few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability with gains up to around 1000 msec in speed and 81% increase in variability for the neural mass models. In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications. © 2018 IOP Publishing Ltd.
Li, Wei; Liu, Jian Guo; Zhu, Ning Hua
2015-04-15
We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of results accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.
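To make the difference in particle-particle treatments concrete, the snippet below contrasts a soft-sphere (DEM-style) linear spring-dashpot normal contact force with an event-based hard-sphere velocity update from momentum conservation and a restitution coefficient. Stiffness, damping, and restitution values are illustrative, not taken from the MFIX setups.

```python
import numpy as np

def dem_normal_force(x1, x2, v1, v2, radius, k_n=1e4, eta_n=5.0):
    """Soft-sphere (DEM) linear spring-dashpot normal contact force on particle 1."""
    rel = x1 - x2
    dist = np.linalg.norm(rel)
    overlap = 2 * radius - dist
    if overlap <= 0.0:
        return np.zeros_like(rel)           # no contact
    n = rel / dist                          # unit normal, points from 2 to 1
    vn = np.dot(v1 - v2, n)                 # normal relative velocity
    return (k_n * overlap - eta_n * vn) * n

def hard_sphere_collision(v1, v2, n, e=0.9, m1=1.0, m2=1.0):
    """Hard-sphere post-collision velocities from momentum conservation (restitution e)."""
    vn = np.dot(v1 - v2, n)
    if vn >= 0.0:
        return v1, v2                       # already separating, no collision
    j = -(1.0 + e) * vn / (1.0 / m1 + 1.0 / m2)   # impulse magnitude
    return v1 + j * n / m1, v2 - j * n / m2

# Example: two equal particles approaching head-on along x.
x1, x2 = np.array([0.0, 0.0]), np.array([0.0095, 0.0])
v1, v2 = np.array([0.1, 0.0]), np.array([-0.1, 0.0])
n = (x1 - x2) / np.linalg.norm(x1 - x2)
print("DEM force on particle 1 :", dem_normal_force(x1, x2, v1, v2, radius=0.005))
print("hard-sphere velocities  :", hard_sphere_collision(v1, v2, n))
```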
The effects of implicit encouragement and the putative confession on children's memory reports.
Cleveland, Kyndra C; Quas, Jodi A; Lyon, Thomas D
2018-06-01
The current study tested the effects of two interview techniques on children's report productivity and accuracy following exposure to suggestion: implicit encouragement (backchanneling, use of children's names) and the putative confession (telling children that a suspect "told me everything that happened and wants you to tell the truth"). One hundred and forty-three 3- to 8-year-old children participated in a classroom event. One week later, they took part in a highly suggestive conversation about the event and then a mock forensic interview in which the two techniques were experimentally manipulated. Greater use of implicit encouragement led to increases, with age, in children's narrative productivity. Neither technique improved or reduced children's accuracy. No increases in errors about previously suggested information were evident when children received either technique. Implications for the use of these techniques in child forensic interviews are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Experimental verification of an interpolation algorithm for improved estimates of animal position
NASA Astrophysics Data System (ADS)
Schell, Chad; Jaffe, Jules S.
2004-07-01
This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
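A minimal sketch of the kind of estimator verified here: given a model of each beam's amplitude response versus target angle, scan candidate positions, fit a closed-form least-squares target strength at each candidate, and keep the candidate that minimizes the residual against the measured beam outputs. The Gaussian beam-pattern model, beam geometry, and grid search are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

beam_angles = np.linspace(-10.0, 10.0, 9)      # beam axes in degrees (assumed)
beam_width = 3.0                               # beam pattern width (assumed)

def beam_response(target_angle, strength=1.0):
    """Modelled amplitude of each beam for a target at the given angle."""
    return strength * np.exp(-0.5 * ((beam_angles - target_angle) / beam_width) ** 2)

# Simulated measurement: true target between two beam axes, plus noise.
true_angle = 2.7
measured = beam_response(true_angle) + rng.normal(0.0, 0.02, beam_angles.size)

# Grid search over candidate positions; least-squares strength per candidate.
candidates = np.linspace(-10.0, 10.0, 2001)
best_angle, best_res = None, np.inf
for a in candidates:
    model = beam_response(a)
    s = measured @ model / (model @ model)      # closed-form LS target strength
    res = np.sum((measured - s * model) ** 2)
    if res < best_res:
        best_angle, best_res = a, res

peak_beam = beam_angles[np.argmax(measured)]
print(f"max-beam estimate: {peak_beam:+.2f} deg, interpolated estimate: {best_angle:+.2f} deg")
```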
Sampling factors influencing accuracy of sperm kinematic analysis.
Owen, D H; Katz, D F
1993-01-01
Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
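For concreteness, standard kinematic measures can be computed from sampled head positions as in the sketch below; the sinusoidal trajectory and frame rates are assumptions used only to show how the sampling rate enters the calculation (the curvilinear path, and hence VCL and LIN, is under-resolved at low frame rates).

```python
import numpy as np

def kinematics(xy, dt):
    """Curvilinear velocity (VCL), straight-line velocity (VSL) and linearity (LIN)."""
    steps = np.diff(xy, axis=0)
    path = np.sum(np.linalg.norm(steps, axis=1))
    duration = dt * (len(xy) - 1)
    vcl = path / duration
    vsl = np.linalg.norm(xy[-1] - xy[0]) / duration
    return vcl, vsl, vsl / vcl

# Idealized trajectory: forward progression with lateral head oscillation (assumed values).
beat_hz, amp, speed, duration = 10.0, 3.0, 50.0, 1.0   # Hz, um, um/s, s

for fps in (30, 60, 240):
    t = np.arange(0.0, duration + 1e-9, 1.0 / fps)
    xy = np.column_stack([speed * t, amp * np.sin(2 * np.pi * beat_hz * t)])
    vcl, vsl, lin = kinematics(xy, 1.0 / fps)
    print(f"{fps:3d} fps: VCL={vcl:6.1f} um/s  VSL={vsl:5.1f} um/s  LIN={lin:.2f}")
```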
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.
2011-01-01
The speed–accuracy trade-off (SAT) is ubiquitous in decision tasks. While the neural mechanisms underlying decisions are generally well characterized, the application of decision-theoretic methods to the SAT has been difficult to reconcile with experimental data suggesting that decision thresholds are inflexible. Using a network model of a cortical decision circuit, we demonstrate the SAT in a manner consistent with neural and behavioral data and with mathematical models that optimize speed and accuracy with respect to one another. In simulations of a reaction time task, we modulate the gain of the network with a signal encoding the urgency to respond. As the urgency signal builds up, the network progresses through a series of processing stages supporting noise filtering, integration of evidence, amplification of integrated evidence, and choice selection. Analysis of the network's dynamics formally characterizes this progression. Slower buildup of urgency increases accuracy by slowing down the progression. Faster buildup has the opposite effect. Because the network always progresses through the same stages, decision-selective firing rates are stereotyped at decision time. PMID:21415911
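A much-reduced sketch of the mechanism described above, assuming a pair of racing accumulators whose inputs (evidence and noise alike) are scaled by a linearly rising urgency gain; a slower urgency build-up trades speed for accuracy. This is a toy stand-in for the full cortical attractor network, with all parameters invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def trial(urgency_slope, dt=2e-3, threshold=1.0, sigma=0.1):
    """Two racing accumulators; signal and noise are both scaled by the urgency gain."""
    inputs = np.array([0.33, 0.27])          # unit 0 receives slightly stronger evidence
    r = np.zeros(2)
    for step in range(2500):                 # 5 s maximum
        gain = 1.0 + urgency_slope * step * dt
        r += gain * (inputs * dt + sigma * np.sqrt(dt) * rng.normal(size=2))
        if r.max() >= threshold:
            return (step + 1) * dt, int(np.argmax(r) == 0)
    return 5.0, int(r[0] > r[1])

for slope in (0.0, 2.0, 16.0):               # slower vs faster urgency build-up
    results = [trial(slope) for _ in range(400)]
    rts, correct = zip(*results)
    print(f"urgency slope {slope:4.1f}: mean RT {np.mean(rts):.2f} s, "
          f"accuracy {np.mean(correct):.2f}")
```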
Morphological Awareness and Children's Writing: Accuracy, Error, and Invention
McCutchen, Deborah; Stull, Sara
2014-01-01
This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade U.S. students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in spelling and errors of the sort we termed morphological inventions, which entailed inappropriate, novel pairings of stems and suffixes. Regressions were used to determine the relationship between morphological awareness, morphological accuracy, and spelling accuracy, as well as between morphological awareness and morphological inventions. Linear regressions revealed that morphological awareness uniquely predicted children's generation of accurate morphological derivations, regardless of whether or not accurate spelling was required. A logistic regression indicated that morphological awareness was also uniquely predictive of morphological invention, with higher morphological awareness increasing the probability of morphological invention. These findings suggest that morphological knowledge may not only assist children with spelling during writing, but may also assist with word production via generative experimentation with morphological rules during sentence generation. Implications are discussed for the development of children's morphological knowledge and relationships with writing. PMID:25663748
NASA Astrophysics Data System (ADS)
Löw, Fabian; Schorcht, Gunther; Michel, Ulrich; Dech, Stefan; Conrad, Christopher
2012-10-01
Accurate crop identification and crop area estimation are important for studies on irrigated agricultural systems, yield and water demand modeling, and agrarian policy development. In this study a novel combination of Random Forest (RF) and Support Vector Machine (SVM) classifiers is presented that (i) enhances crop classification accuracy and (ii) provides spatial information on map uncertainty. The methodology was implemented over four distinct irrigated sites in Middle Asia using RapidEye time series data. The RF feature importance statistic was used as the feature-selection strategy for the SVM to assess possible negative effects on classification accuracy caused by an oversized feature space. The results of the individual RF and SVM classifications were combined with rules based on posterior classification probability and estimates of classification probability entropy. SVM classification performance was increased by feature selection through RF. Further experimental results indicate that the hybrid classifier improves overall classification accuracy as well as user's and producer's accuracy in comparison to the single classifiers.
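A minimal sketch of the feature-selection and uncertainty steps described above, assuming scikit-learn and synthetic data in place of the RapidEye time series: Random Forest importances rank the features, an SVM is trained on the top-ranked subset, and per-sample uncertainty is taken from the entropy of the SVM's class probabilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel time-series features and crop labels.
X, y = make_classification(n_samples=2000, n_features=60, n_informative=15,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Random Forest supplies the feature-importance ranking.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:20]    # keep 20 best features (assumed)

# 2) SVM trained on the reduced feature space.
svm = SVC(probability=True, random_state=0).fit(X_tr[:, top], y_tr)
print("SVM accuracy on selected features:", svm.score(X_te[:, top], y_te))

# 3) Per-sample uncertainty from classification-probability entropy.
proba = svm.predict_proba(X_te[:, top])
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
print("mean / max posterior entropy:", entropy.mean().round(3), entropy.max().round(3))
```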
Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness
NASA Astrophysics Data System (ADS)
Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng
2018-05-01
Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on a turbine vane. Film cooling effectiveness of the double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzed the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement. Results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements, whereas for in-line configurations the superposition values of film cooling effectiveness are much higher than the experimental data. For different hole shapes, the accuracy of superposition predictions for converging-expanding holes is better than for cylindrical and compound-angle holes. For the two different hole spacings studied in this paper, predictions show good agreement with the experimental results.
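For reference, superposition predictions of this kind are commonly built from the single-row effectivenesses with the Sellers-type relation eta = 1 - prod(1 - eta_i) over the upstream rows; a minimal sketch follows, with the single-row distributions invented for illustration.

```python
import numpy as np

def superposed_effectiveness(single_row_eta):
    """Sellers-type superposition: eta = 1 - prod(1 - eta_i) over upstream rows."""
    eta = np.zeros_like(single_row_eta[0])
    for eta_i in single_row_eta:            # rows ordered upstream to downstream
        eta = eta + eta_i * (1.0 - eta)
    return eta

# Illustrative single-row adiabatic effectiveness values at x/D = 0, 5, ..., 30 (assumed).
eta_row1 = np.array([0.00, 0.35, 0.25, 0.18, 0.14, 0.11, 0.09])
eta_row2 = np.array([0.00, 0.00, 0.30, 0.22, 0.16, 0.13, 0.10])   # second row further downstream

print("superposed eta:", superposed_effectiveness([eta_row1, eta_row2]).round(3))
```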
Thermal Conductivity Measurement of Anisotropic Biological Tissue In Vitro
NASA Astrophysics Data System (ADS)
Yue, Kai; Cheng, Liang; Yang, Lina; Jin, Bitao; Zhang, Xinxin
2017-06-01
The accurate determination of the thermal conductivity of biological tissues has implications for the success of cryosurgical/hyperthermia treatments. In light of the evident anisotropy in some biological tissues, a new modified stepwise transient method was proposed to simultaneously measure the transverse and longitudinal thermal conductivities of anisotropic biological tissues. The physical and mathematical models were established, and the analytical solution was derived. Sensitivity analysis and experimental simulation were performed to determine the feasibility and measurement accuracy of simultaneously measuring the transverse and longitudinal thermal conductivities. The experimental system was set up, and its measurement accuracy was verified by measuring the thermal conductivity of a reference standard material. The thermal conductivities of the pork tenderloin and bovine muscles were measured using the traditional 1D and proposed methods, respectively, at different temperatures. Results indicate that the thermal conductivities of the bovine muscle are lower than those of the pork tenderloin muscle, whereas the bovine muscle was determined to exhibit stronger anisotropy than the pork tenderloin muscle. Moreover, the longitudinal thermal conductivity is larger than the transverse thermal conductivity for the two tissues, and all thermal conductivities increase with temperature. Compared with the traditional 1D method, results obtained by the proposed method are slightly higher, although the relative deviation is below 5 %.
Pairwise graphical models for structural health monitoring with dense sensor arrays
NASA Astrophysics Data System (ADS)
Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral
2017-09-01
Through advances in sensor technology and development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative by increasing the number of sensors, the spatial dependencies between sensor data are increased at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) for considering the spatial dependencies between sensor measurements in dense sensor networks or arrays to improve damage localization accuracy in structural health monitoring (SHM) application. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story two-bay steel model structure that is instrumented by MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that considering the spatial dependencies by the proposed algorithm can significantly improve damage localization accuracy.
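A small sketch of the approximation step described above: estimate pairwise mutual information between sensor channels from 2-D histograms and keep the strongest pairs as edges of the pairwise graphical model. The synthetic signals, bin count, and edge threshold are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate between two sensor channels."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic dense-array data: channels share a common component with decaying weight.
rng = np.random.default_rng(5)
n_ch, n_t = 8, 5000
common = rng.normal(size=n_t)
data = np.stack([(1.5 * 0.8**i) * common + rng.normal(size=n_t) for i in range(n_ch)])

# Pairwise MI matrix and thresholded edge set for the pairwise graphical model.
mi = np.array([[mutual_information(data[i], data[j]) for j in range(n_ch)]
               for i in range(n_ch)])
np.fill_diagonal(mi, 0.0)
edges = [(i, j) for i in range(n_ch) for j in range(i + 1, n_ch) if mi[i, j] > 0.1]
print("edges retained:", edges)
```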
Naik, Ganesh R; Kumar, Dinesh K; Arjunan, Sridhar
2009-01-01
This paper has experimentally verified and compared features of sEMG (Surface Electromyogram) such as ICA (Independent Component Analysis) and Fractal Dimension (FD) for identification of low level forearm muscle activities. The fractal dimension was used as a feature as reported in the literature. The normalized feature values were used as training and testing vectors for an Artificial neural network (ANN), in order to reduce inter-experimental variations. The identification accuracy using FD of four channels sEMG was 58%, and increased to 96% when the signals are separated to their independent components using ICA.
CD-Based Indices for Link Prediction in Complex Network.
Wang, Tao; Wang, Hongjue; Wang, Xiaoxia
2016-01-01
Lots of similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD based on the distance between nodes and the cosine value between vectors is proposed in this paper. Firstly, a node coordinate matrix can be obtained from node distances (which is different from the distance matrix), and the row vectors of the matrix are regarded as the coordinates of the nodes. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. Then, a series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are presented and applied to ten real networks. Experimental results demonstrate the effectiveness of CD-based indices. The effects of network clustering coefficient and assortative coefficient on prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks.
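A rough, simplified stand-in for the CD idea as summarized above: take shortest-path distances, treat each row of the distance matrix as that node's coordinate vector, and score candidate links by the cosine between the two coordinate vectors damped by the distance between the nodes. The exact construction of the coordinate matrix and the LD/CDI refinements in the paper differ from this sketch; networkx is assumed for graph handling.

```python
import itertools
import networkx as nx
import numpy as np

# Toy network (Zachary's karate club ships with networkx).
G = nx.karate_club_graph()
n = G.number_of_nodes()

# All-pairs shortest-path distance matrix.
D = np.zeros((n, n))
for i, lengths in nx.all_pairs_shortest_path_length(G):
    for j, d in lengths.items():
        D[i, j] = d

def cd_score(i, j):
    """Simplified CD-style score: cosine between the nodes' distance-profile rows,
    damped by the distance between the two nodes themselves."""
    vi, vj = D[i], D[j]
    cos = vi @ vj / (np.linalg.norm(vi) * np.linalg.norm(vj))
    return cos / (1.0 + D[i, j])

# Rank currently unconnected node pairs; top pairs are the predicted links.
non_edges = [p for p in itertools.combinations(range(n), 2) if not G.has_edge(*p)]
ranked = sorted(non_edges, key=lambda p: cd_score(*p), reverse=True)
print("top predicted links:", ranked[:5])
```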
Development of at-wavelength metrology for x-ray optics at the ALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Yuan, Sheng
2010-07-09
The comprehensive realization of the exciting advantages of new third- and fourth-generation synchrotron radiation light sources requires concomitant development of reflecting and diffractive x-ray optics capable of micro- and nano-focusing, brightness preservation, and super high resolution. The fabrication, tuning, and alignment of the optics are impossible without adequate metrology instrumentation, methods, and techniques. While the accuracy of ex situ optical metrology at the Advanced Light Source (ALS) has reached a state-of-the-art level, wavefront control on beamlines is often limited by environmental and systematic alignment factors, and inadequate in situ feedback. At ALS beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of tests with increasing accuracy and sensitivity. Geometric Hartmann tests, performed with a scanning illuminated sub-aperture, determine the wavefront slope across the full mirror aperture. Shearing interferometry techniques use coherent illumination and provide higher sensitivity wavefront measurements. Combining these techniques with high precision optical metrology and experimental methods will enable us to provide in situ setting and alignment of bendable x-ray optics to realize diffraction-limited, sub 50 nm focusing at beamlines. We describe here details of the metrology beamline endstation, the x-ray beam diagnostic system, and original experimental techniques that have already allowed us to precisely set a bendable KB mirror to achieve a focused spot size of 150 nm.
de Saint Laumer, Jean‐Yves; Leocata, Sabine; Tissot, Emeline; Baroux, Lucie; Kampf, David M.; Merle, Philippe; Boschung, Alain; Seyfried, Markus
2015-01-01
We previously showed that the relative response factors of volatile compounds were predictable from either combustion enthalpies or their molecular formulae only [1]. We now extend this prediction to silylated derivatives by adding an increment in the ab initio calculation of combustion enthalpies. The accuracy of the experimental relative response factors database was also improved and its population increased to 490 values. In particular, more brominated compounds were measured, and their prediction accuracy was improved by adding a correction factor in the algorithm. The correlation coefficient between predicted and measured values increased from 0.936 to 0.972, leading to a mean prediction accuracy of ± 6%. Thus, 93% of the relative response factor values were predicted with an accuracy of better than ± 10%. The capabilities of the extended algorithm are exemplified by (i) the quick and accurate quantification of hydroxylated metabolites resulting from a biodegradation test after silylation and prediction of their relative response factors, without having the reference substances available; and (ii) the rapid purity determinations of volatile compounds. This study confirms that gas chromatography with a flame ionization detector, using predicted relative response factors, is one of the few techniques that enable quantification of volatile compounds without calibrating the instrument with the pure reference substance. PMID:26179324
Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F
2016-08-01
The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated by the technology acceptance model (TAM) and User satisfaction by the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and beneficial median total, and subscale PSSUQ scores. The initial standard practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved by using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend for improvement of individual performance on repeated case evaluation.
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, Computer Aided Engineering was used for injection moulding simulation. A Design of Experiments (DOE) method based on a Latin square orthogonal array was utilized. The relationships between the injection moulding parameters and warpage were identified from the experimental data used. Response Surface Methodology (RSM) was used to validate the model accuracy. Then, the RSM and GA methods were combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining RSM and GA also contributes to minimising the warpage that occurs.
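A hedged sketch of the RSM-plus-evolutionary-search idea: fit a quadratic response surface to warpage values from a designed experiment, then minimize the fitted surface with an evolutionary optimizer (scipy's differential evolution stands in for the GA here). The process window, parameter names, and warpage response are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)

# Assumed process window: melt temperature (C), packing pressure (MPa), cooling time (s).
bounds = [(200.0, 260.0), (60.0, 100.0), (10.0, 30.0)]

def run_experiment(x):
    """Stand-in for a simulated/measured warpage response (mm), with noise."""
    t, p, c = x
    return (0.0004 * (t - 235) ** 2 + 0.0006 * (p - 85) ** 2
            + 0.002 * (c - 22) ** 2 + 0.15 + rng.normal(0.0, 0.002))

# Designed experiment: random points stand in for the Latin-square design.
X = np.array([[rng.uniform(*b) for b in bounds] for _ in range(30)])
y = np.array([run_experiment(x) for x in X])

# Quadratic response surface: warpage ~ 1 + x + x^2 + pairwise interactions.
def features(x):
    t, p, c = x
    return np.array([1, t, p, c, t*t, p*p, c*c, t*p, t*c, p*c])

coeffs, *_ = np.linalg.lstsq(np.array([features(x) for x in X]), y, rcond=None)
surrogate = lambda x: features(x) @ coeffs

# Evolutionary search on the fitted surface (stand-in for the GA step).
result = differential_evolution(surrogate, bounds, seed=0)
print("optimal settings (T, P, t_cool):", np.round(result.x, 1))
print("predicted minimum warpage (mm):", round(result.fun, 3))
```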
Meterological correction of optical beam refraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukin, V.P.; Melamud, A.E.; Mironov, V.L.
1986-02-01
At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm obtained permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model based approach, consisting of a finite element approach, to simulate the sheet forming, and a multi body system, modeling the compliant robot structure, has been developed. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.
Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training
Zarate, Jean Mary; Delhommeau, Karine; Wood, Sean; Zatorre, Robert J.
2010-01-01
Background Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. Methodology/Principal Findings We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Conclusions/Significance Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production. PMID:20567521
Modelling mono-digestion of grass silage in a 2-stage CSTR anaerobic digester using ADM1.
Thamsiriroj, T; Murphy, J D
2011-01-01
This paper examines 174 days of experimental data and modelling of mono-digestion of grass silage in a two stage wet process with recirculation of liquor; the two vessels have an effective volume of 312 L each. The organic loading rate is initiated at 0.5 kg VS m(-3) d(-1) (first 74 days) and subsequently increased to 1 kg VS m(-3) d(-1). The experimental data was used to generate a mathematical model (ADM1) which was calibrated over the first 74 days of operation. Good accuracy with experimental data was found for the subsequent 100 days. Results of the model would suggest starting the process without recirculation and thus building up the solids content of the liquor. As the level of VFA increases, recirculation should be employed to control VFA. Recirculation also controls solids content and pH. Methane production was estimated at 88% of maximum theoretical production. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanley, B.J.; Guiochon, G.
1994-11-01
Adsorption energy distributions (AEDs) are calculated from the classical, fundamental integral equation of adsorption using adsorption isotherms and the expectation-maximization method of parameter estimation. The adsorption isotherms are calculated from nonlinear elution profiles obtained from gas chromatographic data using the characteristic points method of finite concentration chromatography. Porous layer open tubular capillary columns are used to support the adsorbent. The performance of these columns is compared to that of packed columns in terms of their ability to supply accurate isotherm data and AEDs. The effect of the finite column efficiency and the limited loading factor on the accuracy of the estimated energy distributions is presented. This accuracy decreases with decreasing efficiency, and approximately 5000 theoretical plates are needed when the loading factor, L_f, equals 0.56 for sampling of a unimodal Gaussian distribution. Increasing L_f further increases the contribution of finite efficiency to the AED and causes a divergence at the low-energy endpoint if too high. This occurs as the retention time approaches the holdup time. Data are presented for diethyl ether adsorption on porous silica and its C-18-bonded derivative. 36 refs., 8 figs., 2 tabs.
Pavlov, Michael Y; Ehrenberg, Måns
2018-05-20
Accurate translation of genetic information is crucial for synthesis of functional proteins in all organisms. We use recent experimental data to discuss how induced fit affects accuracy of initial codon selection on the ribosome by aminoacyl transfer RNA in ternary complex (T3) with elongation factor Tu (EF-Tu) and guanosine-5'-triphosphate (GTP). We define actual accuracy ([Formula: see text]) of a particular protein synthesis system as its current accuracy and the effective selectivity ([Formula: see text]) as [Formula: see text] in the limit of zero ribosomal binding affinity for T3. Intrinsic selectivity ([Formula: see text]), defined as the upper thermodynamic limit of [Formula: see text], is determined by the free energy difference between near-cognate and cognate T3 in the pre-GTP hydrolysis state on the ribosome. [Formula: see text] is much larger than [Formula: see text], suggesting the possibility of a considerable increase in [Formula: see text] and [Formula: see text] at negligible kinetic cost. Induced fit increases [Formula: see text] and [Formula: see text] without affecting [Formula: see text], and aminoglycoside antibiotics reduce [Formula: see text] and [Formula: see text] at unaltered [Formula: see text].
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Daily modulation of the speed-accuracy trade-off.
Gueugneau, Nicolas; Pozzo, Thierry; Darlot, Christian; Papaxanthis, Charalambos
2017-07-25
Goal-oriented arm movements are characterized by a balance between speed and accuracy. The relation between speed and accuracy has been formalized by Fitts' law and predicts a linear increase in movement duration with task constraints. Up to now this relation has been investigated on a short-time scale only, that is during a single experimental session, although chronobiological studies report that the motor system is shaped by circadian rhythms. Here, we examine whether the speed-accuracy trade-off could vary during the day. Healthy adults carried out arm-pointing movements as accurately and fast as possible toward targets of different sizes at various hours of the day, and variations in Fitts' law parameters were scrutinized. To investigate whether the potential modulation of the speed-accuracy trade-off has peripheral and/or central origins, a motor imagery paradigm was used as well. Results indicated a daily (circadian-like) variation for the durations of both executed and mentally simulated movements, in strictly controlled accuracy conditions. While Fitts' law was held for the whole sessions of the day, the slope of the relation between movement duration and task difficulty expressed a clear modulation, with the lowest values in the afternoon. This variation of the speed-accuracy trade-off in executed and mental movements suggests that, beyond execution parameters, motor planning mechanisms are modulated during the day. Daily update of forward models is discussed as a potential mechanism. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Accuracy of Conventional and Digital Radiography in Detecting External Root Resorption
Mesgarani, Abbas; Haghanifar, Sina; Ehsani, Maryam; Yaghub, Samereh Dokhte; Bijani, Ali
2014-01-01
Introduction: External root resorption (ERR) is associated with physiological and pathological dissolution of mineralized tissues by clastic cells and radiography is one of the most important methods in its diagnosis. The aim of this experimental study was to evaluate the accuracy of conventional intraoral radiography (CR) in comparison with digital radiographic techniques, i.e. charge-coupled device (CCD) and photo-stimulable phosphor (PSP) sensors, in detection of ERR. Methods and Materials: This study was performed on 80 extracted human mandibular premolars. After taking separate initial periapical radiographs with the CR technique, CCD and PSP sensors, artificial defects resembling ERR with variable sizes were created in the apical half of the mesial, distal and buccal surfaces of the teeth. Ten teeth were used as control samples without any resorption. The radiographs were then repeated with 2 different exposure times and the images were observed by 3 observers. Data were analyzed using SPSS version 17 and chi-squared and Cohen’s Kappa tests with 95% confidence interval (CI=95%). Results: The CCD had the highest percentage of correct assessment compared to the CR and PSP sensors, although the difference was not significant (P=0.39). A higher radiation dose was shown to increase the accuracy of diagnosis; however, this was only significant for the CCD sensor (P=0.02). Also, the accuracy of diagnosis increased with the increase in the size of the lesion (P=0.001). Conclusion: No statistically significant difference was observed in the accurate detection of ERR between conventional and digital radiographic techniques. PMID:25386202
Slezak, Diego Fernandez; Sigman, Mariano
2012-08-01
The time spent making a decision and its quality define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple decision-making experiments. However, in a more complex perspective comprising components of regulatory focus, ambition, fear, risk and social variables, adjustment of the speed-accuracy trade-off may not be optimal. Specifically, regulatory focus theory shows that people can be set in a promotion mode, where focus is on seeking to approach a desired state (to win), or in a prevention mode, focusing on avoiding undesired states (not to lose). In promotion, people are eager to take risks, increasing speed and decreasing accuracy. In prevention, strategic vigilance increases, decreasing speed and improving accuracy. When time and accuracy have to be compromised, one can ask which of these 2 strategies optimizes reward, leading to optimal performance. This is investigated here in a unique experimental environment. Decision making is studied in rapid chess (180 s per game), in which the goal of a player is to mate the opponent in a finite amount of time or, alternatively, to time-out the opponent with sufficient material to mate. In different games, players face strong and weak opponents. It was observed that (a) players adopt a more conservative strategy when facing strong opponents, with slower and more accurate moves, and (b) this strategy is suboptimal: players would increase their winning likelihood against strong opponents by using the policy they adopt when confronting opponents of similar strength. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias
2016-03-01
Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to patient specifically optimize the configuration of the mechanism with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimen. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimen.
Flat-plate techniques for measuring reflectance of macro-algae (Ulva curvata)
Ramsey, Elijah W.; Rangoonwala, Amina; Thomsen, Mads Solgaard; Schwarzschild, Arthur
2012-01-01
We tested the consistency and accuracy of flat-plate spectral measurements (400–1000 nm) of the marine macrophyte Ulva curvata. With sequential addition of Ulva thallus layers, the reflectance progressively increased from 6% to 9% with six thalli in the visible (VIS) and from 5% to 19% with ten thalli in the near infrared (NIR). This progressive increase was simulated by a mathematical calculation based on an Ulva thallus diffuse reflectance weighted by a transmittance power series. Experimental and simulated reflectance differences that were particularly high in the NIR most likely resulted from residual water and layering structure unevenness in the experimental progression. High spectral overlap existed between fouled and non-fouled Ulva mats and the coexistent lagoon mud in the VIS, whereas in the NIR, spectral contrast was retained but substantially dampened by fouling.
Unifying Speed-Accuracy Trade-Off and Cost-Benefit Trade-Off in Human Reaching Movements.
Peternel, Luka; Sigaud, Olivier; Babič, Jan
2017-01-01
Two basic trade-offs interact while our brain decides how to move our body. First, with the cost-benefit trade-off, the brain trades between the importance of moving faster toward a target that is more rewarding and the increased muscular cost resulting from a faster movement. Second, with the speed-accuracy trade-off, the brain trades between how accurate the movement needs to be and the time it takes to achieve such accuracy. So far, these two trade-offs have been well studied in isolation, despite their obvious interdependence. To overcome this limitation, we propose a new model that is able to simultaneously account for both trade-offs. The model assumes that the central nervous system maximizes the expected utility resulting from the potential reward and the cost over the repetition of many movements, taking into account the probability to miss the target. The resulting model is able to account for both the speed-accuracy and the cost-benefit trade-offs. To validate the proposed hypothesis, we confront the properties of the computational model to data from an experimental study where subjects have to reach for targets by performing arm movements in a horizontal plane. The results qualitatively show that the proposed model successfully accounts for both cost-benefit and speed-accuracy trade-offs.
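The idea of a single utility that couples both trade-offs can be sketched numerically. The sketch below is not the authors' model: it simply assumes a Gaussian endpoint scatter that grows with average speed (speed-accuracy) and an effort cost that grows as duration shrinks (cost-benefit), and picks the movement duration that maximizes expected utility; all coefficients are illustrative.

```python
import numpy as np
from math import erf

def expected_utility(T, target_width=0.02, distance=0.3,
                     reward=1.0, effort_coeff=0.002):
    """Illustrative expected utility of a reach of duration T (seconds).

    Endpoint scatter grows with average speed (signal-dependent noise),
    effort cost grows sharply for fast movements, and the reward is earned
    only when the endpoint lands inside the target.
    """
    sigma = 0.05 * (distance / T)               # assumed endpoint SD (m), ~ speed
    half = target_width / 2.0
    p_hit = erf(half / (sigma * np.sqrt(2)))    # P(|error| < half-width), zero-mean Gaussian
    effort = effort_coeff * distance**2 / T**2  # assumed quadratic-in-speed effort cost
    return p_hit * reward - effort

durations = np.linspace(0.2, 2.0, 200)
utils = [expected_utility(T) for T in durations]
print("duration maximizing expected utility: %.2f s" % durations[int(np.argmax(utils))])
```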
Brookes, Emre; Cao, Weiming; Demeler, Borries
2010-02-01
We report a model-independent analysis approach for fitting sedimentation velocity data which permits simultaneous determination of shape and molecular weight distributions for mono- and polydisperse solutions of macromolecules. Our approach allows for heterogeneity in the frictional domain, providing a more faithful description of the experimental data for cases where frictional ratios are not identical for all components. Because of increased accuracy in the frictional properties of each component, our method also provides more reliable molecular weight distributions in the general case. The method is based on a fine grained two-dimensional grid search over s and f/f (0), where the grid is a linear combination of whole boundary models represented by finite element solutions of the Lamm equation with sedimentation and diffusion parameters corresponding to the grid points. A Monte Carlo approach is used to characterize confidence limits for the determined solutes. Computational algorithms addressing the very large memory needs for a fine grained search are discussed. The method is suitable for globally fitting multi-speed experiments, and constraints based on prior knowledge about the experimental system can be imposed. Time- and radially invariant noise can be eliminated. Serial and parallel implementations of the method are presented. We demonstrate with simulated and experimental data of known composition that our method provides superior accuracy and lower variance fits to experimental data compared to other methods in use today, and show that it can be used to identify modes of aggregation and slow polymerization.
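The core linear-algebra step of such a grid fit, expressing the measured boundary as a non-negative combination of basis models, can be sketched as follows. The basis here is random stand-in data; in the actual method each column would be a finite-element Lamm-equation solution for one (s, f/f0) grid point, and the fit would be repeated within a Monte Carlo loop.

```python
import numpy as np
from scipy.optimize import nnls

def fit_grid(basis_models, data):
    """Non-negative fit of whole-boundary data to a grid of basis models.

    basis_models : (n_observations, n_grid_points) array; each column is a
        simulated sedimentation boundary for one (s, f/f0) grid point
        (assumed to be precomputed, e.g. by a Lamm-equation solver).
    data : (n_observations,) experimental boundary data.
    Returns the non-negative partial concentrations and the RMSD of the fit.
    """
    concentrations, residual_norm = nnls(basis_models, data)
    rmsd = residual_norm / np.sqrt(len(data))
    return concentrations, rmsd

# Toy usage with a random "grid" standing in for real Lamm-equation solutions.
rng = np.random.default_rng(1)
B = rng.random((500, 64))                  # 64 (s, f/f0) grid points
true_c = np.zeros(64); true_c[[10, 40]] = [0.7, 0.3]
d = B @ true_c + rng.normal(0, 1e-3, 500)  # synthetic data with noise
c, rmsd = fit_grid(B, d)
print("recovered nonzero grid points:", np.nonzero(c > 0.05)[0], "RMSD:", rmsd)
```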
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu
2018-06-01
Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industry measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been deeply researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to be any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximate. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally research the property of the random and gamma phase errors in PSFPP without the aid of a mathematical model to express random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
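For context, the wrapped phase that PSFPP relies on is usually computed from N equally shifted fringe images with the standard N-step formula; a minimal sketch (not taken from the paper, with synthetic fringes) is given below.

```python
import numpy as np

def wrapped_phase(images):
    """Standard N-step phase-shifting formula.

    images : list/array of N fringe images I_n = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase phi per pixel.
    """
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(deltas), I, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(deltas), I, axes=1)   # sum_n I_n cos(delta_n)
    return np.arctan2(-num, den)

# Synthetic check: recover a known phase ramp from 4-step fringes.
phi = np.linspace(-3, 3, 256)[None, :] * np.ones((10, 1))
imgs = [128 + 100 * np.cos(phi + 2 * np.pi * n / 4) for n in range(4)]
print(np.allclose(wrapped_phase(imgs), phi, atol=1e-9))
```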
NASA Technical Reports Server (NTRS)
Salas, Manuel D.
2007-01-01
The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.
NASA Astrophysics Data System (ADS)
Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori
This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus the deviation of actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168µm to 0.524µm due to this measurement error. In addition, experimental results showed that the rotation of crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibration because the resonance frequency of wafers can be changed. Thus, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.
Implementation of a close range photogrammetric system for 3D reconstruction of a scoliotic torso
NASA Astrophysics Data System (ADS)
Detchev, Ivan Denislavov
Scoliosis is a deformity of the human spine most commonly encountered with children. After being detected, periodic examinations via x-rays are traditionally used to measure its progression. However, due to the increased risk of cancer, a non-invasive and radiation-free scoliosis detection and progression monitoring methodology is needed. Quantifying the scoliotic deformity through the torso surface is a valid alternative, because of its high correlation with the internal spine curvature. This work proposes a low-cost multi-camera photogrammetric system for semi-automated 3D reconstruction of a torso surface with sub-millimetre level accuracy. The thesis describes the system design and calibration for optimal accuracy. It also covers the methodology behind the reconstruction and registration procedures. The experimental results include the complete reconstruction of a scoliotic torso mannequin. The final accuracy is evaluated through the goodness of fit between the reconstructed surface and a more accurate set of points measured by a coordinate measuring machine.
[Discussion of scattering in THz time domain spectrum tests].
Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han
2014-06-01
Using THz-TDS to extract the absorption spectrum of a sample is an important branch of THz applications. THz radiation scattered by sample particles produces a baseline that rises with frequency in the absorption spectrum. This baseline degrades measurement accuracy by obscuring the true height and shape of the spectrum, so it should be removed in order to eliminate the effects of scattering. In the present paper, we investigate the causes of such baselines, review several scatter-mitigation methods, and summarize some research directions for the future. To validate the correctness of these methods, we designed a series of experiments to compare the computational accuracy of molar concentration. The results indicate that the computational accuracy of molar concentration can be improved, which can serve as a basis for quantitative analysis in further research. Finally, drawing on comprehensive experimental results, we present further research directions on removing scattering effects from THz absorption spectra.
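One common way to mitigate such a scattering baseline, subtracting a smooth frequency-dependent background before quantitative analysis, can be sketched as follows. This is a generic illustration, not the procedure evaluated in the paper: the anchor windows, polynomial order and synthetic spectrum are all assumptions.

```python
import numpy as np

def remove_scatter_baseline(freq, absorbance, anchor_windows, order=2):
    """Fit a low-order polynomial baseline through feature-free anchor windows
    (frequency intervals chosen by the analyst) and subtract it."""
    mask = np.zeros_like(freq, dtype=bool)
    for lo, hi in anchor_windows:
        mask |= (freq >= lo) & (freq <= hi)
    coeffs = np.polyfit(freq[mask], absorbance[mask], order)
    baseline = np.polyval(coeffs, freq)
    return absorbance - baseline, baseline

# Synthetic example: a Lorentzian line on top of a rising scatter-like baseline.
f = np.linspace(0.2, 2.5, 1000)                       # THz
line = 0.8 / (1 + ((f - 1.4) / 0.05) ** 2)            # absorption feature
spectrum = line + 0.15 * f**2 + 0.05                  # plus frequency-rising baseline
corrected, base = remove_scatter_baseline(f, spectrum, [(0.2, 1.0), (1.9, 2.5)])
print("residual baseline near 0.5 THz:", float(corrected[np.argmin(abs(f - 0.5))]))
```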
Study on high-precision measurement of long radius of curvature
NASA Astrophysics Data System (ADS)
Wu, Dongcheng; Peng, Shijun; Gao, Songtao
2016-09-01
It is difficult to measure the radius of curvature (ROC) with high precision because many factors affect the measurement accuracy, and for long radii of curvature some factors are more important than others. This paper first investigates which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support configuration and of the adjustment errors at the cat's eye and confocal positions. Finally, a convex surface with a 1055 micrometer radius of curvature is measured in a high-precision laboratory. Experimental results show that a proper, stable support (three-point support) ensures high-precision measurement of the radius of curvature, and that calibrating the gain at the cat's eye and confocal positions helps locate these positions precisely, thereby increasing the measurement accuracy. With this procedure, high-precision measurement of a long ROC is realized.
g-Factor of heavy ions: a new access to the fine structure constant.
Shabaev, V M; Glazov, D A; Oreshkina, N S; Volotka, A V; Plunien, G; Kluge, H-J; Quint, W
2006-06-30
A possibility for a determination of the fine structure constant in experiments on the bound-electron g-factor is examined. It is found that studying a specific difference of the g-factors of B- and H-like ions of the same spinless isotope in the Pb region to the currently accessible experimental accuracy of 7 × 10⁻¹⁰ would lead to a determination of the fine structure constant to an accuracy which is better than that of the currently accepted value. Further improvements of the experimental and theoretical accuracy could provide a value of the fine structure constant which is several times more precise than the currently accepted one.
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-04-26
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.
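The general flavour of a convolutional-recurrent estimator with parallel feature paths and a bidirectional recurrent layer can be sketched in PyTorch. This is not the published architecture: channel counts, kernel sizes, the number of paths and the classifier head are placeholders chosen only to make the sketch runnable.

```python
import torch
import torch.nn as nn

class ConvBiGRUWorkload(nn.Module):
    """Toy convolutional-recurrent workload estimator for EEG sequences.

    Input: (batch, channels, time). Two parallel convolutional paths with
    different kernel sizes extract features, which are concatenated and fed
    to a bidirectional GRU; the final hidden states drive a binary
    (low/high workload) classifier. Sizes are illustrative only.
    """
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.path_short = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=8, padding=4), nn.ReLU())
        self.path_long = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=64, padding=32), nn.ReLU())
        self.gru = nn.GRU(input_size=64, hidden_size=64,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        t = x.shape[-1]
        a = self.path_short(x)[..., :t]        # trim padding to a common length
        b = self.path_long(x)[..., :t]
        feats = torch.cat([a, b], dim=1)       # (batch, 64, time)
        seq = feats.transpose(1, 2)            # (batch, time, 64) for the GRU
        _, h = self.gru(seq)                   # h: (2, batch, 64)
        h = torch.cat([h[0], h[1]], dim=1)     # forward + backward final states
        return self.classifier(h)

logits = ConvBiGRUWorkload()(torch.randn(4, 19, 512))   # 4 windows of 512 samples
print(logits.shape)                                     # torch.Size([4, 2])
```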
Experimental confirmation of the atomic force microscope cantilever stiffness tilt correction
NASA Astrophysics Data System (ADS)
Gates, Richard S.
2017-12-01
The tilt angle (angle of repose) of an AFM cantilever relative to the surface it is interrogating affects the effective stiffness of the cantilever as it analyzes the surface. For typical AFMs and cantilevers, which are inclined at tilts of 10° to 15°, this is thought to be a 3%-7% stiffness increase correction. While the theoretical geometric analysis of this effect may have reached a consensus that it varies with cos⁻²θ, there is very little experimental evidence to confirm this using AFM cantilevers. Recently, the laser Doppler vibrometry thermal calibration method utilized at NIST has demonstrated sufficient stiffness calibration accuracy and precision to allow a definitive experimental confirmation of the particular trigonometric form of this tilt effect using a commercial microfabricated AFM cantilever specially modified to allow strongly tilted (up to 15°) effective cantilever stiffness measurements.
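The size of the cos⁻²θ correction for typical tilt angles is easy to reproduce; the short sketch below just evaluates the factor and recovers the few-percent range quoted above.

```python
import numpy as np

# Effective stiffness correction for a tilted cantilever: k_eff = k / cos^2(theta)
# (the cos^-2 dependence discussed above). Values below just illustrate the
# quoted few-percent magnitude for typical tilt angles.
for theta_deg in (10, 12, 15):
    factor = 1.0 / np.cos(np.radians(theta_deg)) ** 2
    print(f"tilt {theta_deg:2d} deg: stiffness increase {100 * (factor - 1):.1f} %")
# Expected output: about 3.1 %, 4.5 % and 7.2 % respectively.
```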
Culture and Probability Judgment Accuracy: The Influence of Holistic Reasoning.
Lechuga, Julia; Wiebe, John S
2011-08-01
A well-established phenomenon in the judgment and decision-making tradition is the overconfidence one places in the amount of knowledge that one possesses. Overconfidence or probability judgment accuracy varies not only individually but also across cultures. However, research efforts to explain cross-cultural variations in the overconfidence phenomenon have seldom been made. In Study 1, the authors compared the probability judgment accuracy of U.S. Americans (N = 108) and Mexican participants (N = 100). In Study 2, they experimentally primed culture by randomly assigning English/Spanish bilingual Mexican Americans (N = 195) to response language. Results of both studies replicated the cross-cultural variation of probability judgment accuracy previously observed in other cultural groups. U.S. Americans displayed less overconfidence when compared to Mexicans. These results were then replicated in bilingual participants, when culture was experimentally manipulated with language priming. Holistic reasoning did not account for the cross-cultural variation of overconfidence. Suggestions for future studies are discussed.
NASA Astrophysics Data System (ADS)
Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele
2018-03-01
In the aeronautical and automotive industries, the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and significant restrictions to joint modelling are usually introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we test the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap joint specimen with a riveted joint is simulated numerically and compared with experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction, combining high accuracy with a low contribution to the number of degrees of freedom. The performance of the Rivet Element is compared with that of a non-linear FE model of the rivet, built with solid elements and a dense mesh, and with experimental data. The promising results reported indicate that the Rivet Element can simulate actual structures with several rivet connections with great accuracy.
Bahadure, Nilesh Bhaskarrao; Ray, Arun Kumar; Thethi, Har Pal
2018-01-17
The detection of a brain tumor and its classification from modern imaging modalities is a primary concern, but it is time-consuming and tedious work for radiologists or clinical supervisors. The accuracy of detection and classification of tumor stages performed by radiologists depends on their experience alone, so computer-aided technology is very important for improving diagnostic accuracy. In this study, to improve the performance of tumor detection, we investigated a comparative approach across different segmentation techniques and selected the best one by comparing their segmentation scores. Further, to improve classification accuracy, a genetic algorithm is employed for the automatic classification of tumor stage. The classification decision is supported by extracting relevant features and calculating the tumor area. The experimental results of the proposed technique are evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on segmentation score, accuracy, sensitivity, specificity, and the dice similarity index coefficient. The experiments achieved 92.03% accuracy, 91.42% specificity, 92.36% sensitivity, and an average segmentation score between 0.82 and 0.93, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues in brain MR images. The experiments also obtained an average dice similarity index coefficient of 93.79%, which indicates good overlap between the automatically extracted tumor regions and the tumor regions extracted manually by radiologists.
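The reported evaluation metrics are standard; a short sketch of how accuracy, sensitivity, specificity and the Dice coefficient are computed from binary tumor masks is given below, using synthetic masks in place of real segmentations.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity, specificity and Dice coefficient for binary masks.

    pred, truth : boolean arrays of the same shape (tumor = True).
    The formulas are the standard ones; the example masks are synthetic.
    """
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Synthetic 2-D masks: a circular "manual" region and a slightly shifted "automatic" one.
yy, xx = np.mgrid[:128, :128]
truth = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
pred = (yy - 66) ** 2 + (xx - 62) ** 2 < 20 ** 2
print({k: round(float(v), 4) for k, v in segmentation_metrics(pred, truth).items()})
```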
Design of a Pressure Sensor Based on Optical Fiber Bragg Grating Lateral Deformation
Urban, Frantisek; Kadlec, Jaroslav; Vlach, Radek; Kuchta, Radek
2010-01-01
This paper describes steps involved in the design and realization of a new type of pressure sensor based on the optical fiber Bragg grating. A traditional pressure sensor has very limited usage in heavy industrial environments, particularly in explosive or electromagnetically noisy environments. Utilization of optics in these environments eliminates all surrounding influences. An initial motivation for our development was the research, experimental validation, and realization of a complex smart pressure sensor based on the optical principle. The main benefits of this solution are increased sensitivity, resistance to electromagnetic interference, small dimensions, and potentially increased accuracy. PMID:22163521
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm (GA) and Fuzzy Logic. The membership functions of the fuzzy system are calculated by the GA from pre-observed actual values of shot boundaries, and the classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with the latest techniques and yields better results in terms of the F1-score. PMID:27127500
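F1-based evaluation of shot boundary detection can be sketched as follows; the tolerance-based matching rule and the example frame numbers are illustrative assumptions, since evaluation protocols differ between papers.

```python
def boundary_f1(detected, ground_truth, tolerance=2):
    """Precision/recall/F1 for shot-boundary frames.

    A detected boundary counts as correct if it lies within `tolerance`
    frames of an unmatched ground-truth boundary. Values are illustrative.
    """
    unmatched = sorted(ground_truth)
    tp = 0
    for d in sorted(detected):
        hit = next((g for g in unmatched if abs(g - d) <= tolerance), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(boundary_f1(detected=[120, 304, 451, 788], ground_truth=[121, 305, 600, 790]))
# -> (0.75, 0.75, 0.75)
```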
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site-Specific Weed Management (SSWM) is crucial to ensure crop yields. For SSWM over large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high spatial resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and a skip architecture was applied to increase the prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others in terms of both accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
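The reported overall accuracy and weed-recognition accuracy correspond to simple per-pixel statistics on the predicted and reference class maps; a sketch with synthetic maps is shown below (class codes and error rates are invented).

```python
import numpy as np

def map_accuracies(pred_map, label_map, weed_class=1):
    """Overall accuracy and weed-class recall for per-pixel classification maps.

    pred_map, label_map : integer class maps of the same shape
    (e.g. 0 = rice/soil/other, 1 = weed). Class codes are illustrative.
    """
    overall = np.mean(pred_map == label_map)
    weed_pixels = label_map == weed_class
    weed_recall = np.mean(pred_map[weed_pixels] == weed_class)
    return overall, weed_recall

rng = np.random.default_rng(0)
labels = (rng.random((256, 256)) < 0.2).astype(int)     # synthetic 20 % weed cover
noise = rng.random((256, 256)) < 0.05                   # 5 % of pixels misclassified
preds = np.where(noise, 1 - labels, labels)
print("overall accuracy %.3f, weed recall %.3f" % map_accuracies(preds, labels))
```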
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem whereby the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification with the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach focuses on three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build a mathematical fitting model, with the orthogonal polynomial fitting model being more suitable for modeling the attitude parameters. Furthermore, compared with the image geometric processing results based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287
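The orthogonal-polynomial fitting step can be illustrated with a short sketch that smooths noisy attitude samples with a Chebyshev fit; the time span, angle model and noise level are placeholders, not values from the Yaogan-24 data.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Illustrative sketch (not the paper's implementation): smooth noisy attitude
# angles over a time window by fitting an orthogonal (Chebyshev) polynomial,
# which is numerically better conditioned than an ordinary power basis.
rng = np.random.default_rng(2)
t = np.linspace(0, 30, 601)                               # seconds
truth = 0.002 * np.sin(0.2 * t) + 1e-4 * t                # synthetic roll angle (rad)
measured = truth + rng.normal(0, 2e-5, t.size)            # noisy attitude samples

cheb = Chebyshev.fit(t, measured, deg=9)                  # orthogonal-polynomial fit
smoothed = cheb(t)
print("RMS error raw   : %.2e rad" % np.sqrt(np.mean((measured - truth) ** 2)))
print("RMS error fitted: %.2e rad" % np.sqrt(np.mean((smoothed - truth) ** 2)))
```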
Multi-Stage Target Tracking with Drift Correction and Position Prediction
NASA Astrophysics Data System (ADS)
Chen, Xin; Ren, Keyan; Hou, Yibin
2018-04-01
Most existing tracking methods struggle to combine accuracy and performance, and do not consider the shift between clarity and blur that often occurs. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and a corrective measure. Tracking is based on a correlation filter, with a corrective-measure module to increase both performance and accuracy. Specifically, a convolutional network is used to handle blur in realistic scenes; it is trained on a dataset containing blurred images generated by three blur algorithms. We then propose a position prediction module to reduce the computational cost and make the tracker more capable of handling fast motion. Experimental results show that our tracking method is more robust and more accurate than others on the benchmark sequences.
Artificial tektites: an experimental technique for capturing the shapes of spinning drops
NASA Astrophysics Data System (ADS)
Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.
2015-01-01
Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or `dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax `artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Lu, Zhixin
2017-10-01
Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used for localization in wireless sensor networks. However, the traditional convex programming algorithm suffers from too much overlap between wireless sensor nodes, which leads to low positioning accuracy, so this paper proposes a new algorithm. Building on the traditional convex programming algorithm, a spectrum-monitoring vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the localization region. Because the algorithm only adds the communication of the power value between the unknown node and the sensor nodes, the advantages of the convex programming algorithm, namely simplicity and real-time operation, are essentially preserved. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude lower than the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in larger time needed for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enable the localization of overlapping particles, allowing the use of increased fluorophore activation density, and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, and especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in the localization precision compared to single fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a decrease of 42% in the collection time of super-resolution data with the same resolution. PMID:24466491
Experimental study of low-cost fiber optic distributed temperature sensor system performance
NASA Astrophysics Data System (ADS)
Dashkov, Michael V.; Zharkov, Alexander D.
2016-03-01
Distributed temperature monitoring is an important task for various applications such as oil and gas fields, high-voltage power lines, and fire alarm systems. The most promising solutions are optical fiber distributed temperature sensors (DTS), which have advantages in accuracy, resolution, and range but come at a high cost. Nevertheless, for some applications measurement and localization accuracy are less important than cost. The results of an experimental study of a low-cost Raman-based DTS built on a standard OTDR are presented.
Vertical Accuracy Evaluation of Aster GDEM2 Over a Mountainous Area Based on Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Guo, D.; Cui, T.
2018-05-01
Global digital elevation models (GDEM) provide elementary information on heights of the Earth's surface and objects on the ground. GDEMs have become an important data source for a range of applications, and the vertical accuracy of a GDEM is critical for these applications. Nowadays UAVs have been widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient and more cost-effective, and it produces a DEM of the survey area with high accuracy and high spatial resolution. As a result, DEMs resulting from UAV photogrammetry can be used for a more detailed and accurate evaluation of the GDEM product. This study investigates the vertical accuracy (in terms of elevation accuracy and systematic errors) of the ASTER GDEM Version 2 dataset over complex terrain based on UAV photogrammetry. Experimental results show that the elevation errors of ASTER GDEM2 follow a normal distribution and the systematic error is quite small. The accuracy of the ASTER GDEM2 coincides well with that reported by the ASTER validation team. The accuracy in the research area is negatively correlated with both the slope of the terrain and the number of stereo observations. This study also evaluates the vertical accuracy of the up-sampled ASTER GDEM2. Experimental results show that the accuracy of the up-sampled ASTER GDEM2 data in the research area is not significantly reduced by the complexity of the terrain. The fine-grained accuracy evaluation of the ASTER GDEM2 is informative for GDEM-supported UAV photogrammetric applications.
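The error statistics used in this kind of vertical-accuracy assessment (bias, standard deviation, RMSE, and the relation between error and slope) can be sketched as follows, with synthetic grids standing in for the ASTER GDEM2 and the UAV-derived reference DEM.

```python
import numpy as np

def vertical_accuracy(gdem, reference, slope_deg):
    """Elevation-error statistics of a GDEM against a reference DEM.

    gdem, reference : elevation grids (m) on the same raster; slope_deg is the
    terrain slope per cell. Arrays here are synthetic placeholders.
    """
    err = gdem - reference
    return {
        "mean error (bias), m": float(np.mean(err)),
        "std of error, m": float(np.std(err)),
        "RMSE, m": float(np.sqrt(np.mean(err ** 2))),
        "corr(|error|, slope)": float(np.corrcoef(np.abs(err).ravel(),
                                                  slope_deg.ravel())[0, 1]),
    }

rng = np.random.default_rng(3)
ref = rng.normal(1500, 200, (100, 100))                  # reference elevations (m)
slope = rng.uniform(0, 45, (100, 100))                   # slope in degrees
gdem = ref + rng.normal(2, 1 + 0.2 * slope)              # error grows with slope
for k, v in vertical_accuracy(gdem, ref, slope).items():
    print(f"{k}: {v:.2f}")
```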
Fringe Capacitance Correction for a Coaxial Soil Cell
Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.
2011-01-01
Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as in material characterization and process control. Within these areas, accurate measurement of surface area and bound water content is becoming increasingly important for answering fundamental questions ranging from the characterization of cotton fiber maturity, to the accurate characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique for addressing the increasing demand for higher-accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological communities through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway is then a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which remove the adverse effects that high-surface-area soils and conductivity impart on measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, that is hypothesized to be due to fringe capacitance. It provides an experimental and theoretical basis for the cause of the error and a technique by which to correct the system to remove this error source. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance, which is then used to correct the experimental results. Experimental measurements using differing coaxial cell diameters and probe lengths, once corrected with the Poisson-model-derived correction factor, all produce the same results, thereby lending support to an augmented measurement technique for absolute permittivity. PMID:22346601
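A minimal sketch of the kind of correction described, assuming the fringe effect can be folded into an effective extra length added to the cell, is shown below. The coaxial capacitance formula is the standard ideal-cell expression; the cell dimensions, measured capacitance and the 4 mm effective length are placeholders, not values from the study (where the extra length comes from the Poisson model).

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def permittivity_from_capacitance(C_measured, length, a, b, delta_L=0.0):
    """Relative permittivity of the material filling a coaxial cell.

    Uses the ideal coaxial capacitance C = 2*pi*eps0*eps_r*L / ln(b/a),
    with the physical length augmented by an effective extra length delta_L
    that stands in for the fringe capacitance (a placeholder value here;
    in the study it comes from a Poisson model of the cell).
    """
    geometric = 2 * np.pi * EPS0 * (length + delta_L) / np.log(b / a)
    return C_measured / geometric

# Placeholder numbers: a 100 mm cell, 3 mm inner / 10 mm outer conductor radii.
C_meas = 1.20e-11          # measured capacitance, F (illustrative)
for dL in (0.0, 0.004):    # without and with a 4 mm effective fringe length
    eps_r = permittivity_from_capacitance(C_meas, 0.100, 0.003, 0.010, dL)
    print(f"delta_L = {dL * 1000:.0f} mm -> eps_r = {eps_r:.2f}")
```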
NASA Technical Reports Server (NTRS)
Vallet, M.
1982-01-01
The acoustical index Leq was studied to determine its accuracy in predicting annoyance from traffic noise. Annoyance was tested in experimental situations where the frequency of heavy vehicles varied from 3 to 30 HV/30 min for different classes of the Leq level at 50, 55 and 60 dB(A) of traffic noise. The results showed that: (1) for a constant Leq level the annoyance increases as a function of the number of HV up to a certain threshold at which the annoyance stabilizes; (2) for a constant frequency of passage of HV, the annoyance increases with the Leq level; (3) composite indexes of the type Leq + Log NHV, L1 + EMER or L1 + L10 give a predictive value greater than that of Leq or Log NHV taken alone.
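For reference, Leq is an energy average of short-term A-weighted levels, and the composite indexes mentioned add a term that grows with the number of heavy vehicles; a small sketch with illustrative numbers and an assumed weighting factor is given below.

```python
import numpy as np

def leq(levels_dba):
    """Equivalent continuous sound level Leq from a series of short-term
    A-weighted levels (energy average, the standard definition)."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10 * np.log10(np.mean(10 ** (levels / 10)))

def composite_index(leq_dba, n_heavy_vehicles, weight=10.0):
    """Illustrative composite annoyance index of the 'Leq + Log NHV' type;
    the weighting factor is a placeholder, not the value from the study."""
    return leq_dba + weight * np.log10(max(n_heavy_vehicles, 1))

samples = [52, 55, 61, 58, 54, 70, 66, 53]        # 1-s levels in dB(A), illustrative
print("Leq = %.1f dB(A)" % leq(samples))
print("Leq + 10*log10(NHV) = %.1f" % composite_index(leq(samples), 12))
```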
Motor Inhibition Affects the Speed But Not Accuracy of Aimed Limb Movements in an Insect
Calas-List, Delphine; Clare, Anthony J.; Komissarova, Alexandra; Nielsen, Thomas A.
2014-01-01
When reaching toward a target, human subjects use slower movements to achieve higher accuracy, and this can be accompanied by increased limb impedance (stiffness, viscosity) that stabilizes movements against motor noise and external perturbation. In arthropods, the activity of common inhibitory motor neurons influences limb impedance, so we hypothesized that this might provide a mechanism for speed and accuracy control of aimed movements in insects. We recorded simultaneously from excitatory leg motor neurons and from an identified common inhibitory motor neuron (CI1) in locusts that performed natural aimed scratching movements. We related limb movement kinematics to recorded motor activity and demonstrate that imposed alterations in the activity of CI1 influenced these kinematics. We manipulated the activity of CI1 by injecting depolarizing or hyperpolarizing current or killing the cell using laser photoablation. Naturally higher levels of inhibitory activity accompanied faster movements. Experimentally biasing the firing rate downward, or stopping firing completely, led to slower movements mediated by changes at several joints of the limb. Despite this, we found no effect on overall movement accuracy. We conclude that inhibitory modulation of joint stiffness has effects across most of the working range of the insect limb, with a pronounced effect on the overall velocity of natural movements independent of their accuracy. Passive joint forces that are greatest at extreme joint angles may enhance accuracy and are not affected by motor inhibition. PMID:24872556
New method of 2-dimensional metrology using mask contouring
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka
2008-10-01
We have developed a new method of accurately profiling and measuring a mask shape by utilizing a Mask CD-SEM. The method is intended to achieve the high accuracy, stability and reproducibility of the Mask CD-SEM by adopting, as the key technology, the edge detection algorithm used in CD-SEM for high-accuracy CD measurement. Compared with a conventional image processing method for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with CD-SEM measurement of semiconductor device CDs. By exploiting high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had previously been difficult to measure. In this report, we introduce the algorithm in general, the experimental results and its application in practice. As design rules for semiconductor devices have shrunk further, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask making cost have become major concerns for device manufacturers. That is to say, quality demands are becoming strenuous because of the enormous growth in data volume as refined patterns proliferate in photomask manufacture. As a result, a massive number of simulated errors occur during mask inspection, which lengthens mask production and inspection time, increases cost, and extends delivery time. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose two-dimensional metrology for refined patterns as the best DFM solution.
Hüfner, T; Geerling, J; Oldag, G; Richter, M; Kfuri, M; Pohlemann, T; Krettek, C
2005-01-01
This study was designed to determine the clinically relevant accuracy of CT-based navigation for drilling in an experimental laboratory model. Twelve drill bits of varying lengths and diameters were tested in 2 different set-ups. Group 1 used a free-hand navigated drilling technique with foam blocks equipped with titanium target points. Group 2 (control) used a newly developed 3-dimensional measurement device, also equipped with titanium target points, with a fixed entry for the navigated drill to minimize bending forces. One examiner performed 690 navigated drillings in both groups using solely the monitor screen for control. The difference between the planned and the actual starting and target points (up to 150 mm apart) was measured in millimeters and analyzed with the Levene test and a nonpaired t test; the significance level was set at P < 0.05. The core accuracy of the navigation system measured with the 3-dimensional device was 0.5 mm. The mean distance from planned to actual entry points in group 1 was 1.3 mm (range, 0.6-3.4 mm), and the mean distance between planned and actual target points was 3.4 mm (range, 1.7-5.8 mm). Free-hand navigated drilling showed increasing deviation with increasing drill-bit length and drilling-channel length for the 2.5 and 3.2 mm drill bits, but not for the 3.5 and 4.5 mm bits (P < 0.05). The core accuracy of the navigation system is high. Compared with the navigated free-hand technique, the results suggest that drill-bit deflection directly degrades precision, which decreases when smaller-diameter and longer drill bits are used.
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
DOT National Transportation Integrated Search
1977-06-01
Flight tests were conducted at the National Aviation Facilities Experimental : Center (NAFEC) using a general aviation area navigation (RNAV) system to : investigate system accuracies and resultant airspace requirements in the : terminal area. Issues...
Digital image analysis: improving accuracy and reproducibility of radiographic measurement.
Bould, M; Barnard, S; Learmonth, I D; Cunningham, J L; Hardy, J R
1999-07-01
To assess the accuracy and reproducibility of a digital image analyser and the human eye, in measuring radiographic dimensions. We experimentally compared radiographic measurement using either an image analyser system or the human eye with digital caliper. The assessment of total hip arthroplasty wear from radiographs relies on both the accuracy of radiographic images and the accuracy of radiographic measurement. Radiographs were taken of a slip gauge (30+/-0.00036 mm) and slip gauge with a femoral stem. The projected dimensions of the radiographic images were calculated by trigonometry. The radiographic dimensions were then measured by blinded observers using both techniques. For a single radiograph, the human eye was accurate to 0.26 mm and reproducible to +/-0.1 mm. In comparison the digital image analyser system was accurate to 0.01 mm with a reproducibility of +/-0.08 mm. In an arthroplasty model, where the dimensions of an object were corrected for magnification by the known dimensions of a femoral head, the human eye was accurate to 0.19 mm, whereas the image analyser system was accurate to 0.04 mm. The digital image analysis system is up to 20 times more accurate than the human eye, and in an arthroplasty model the accuracy of measurement increases four-fold. We believe such image analysis may allow more accurate and reproducible measurement of wear from standard follow-up radiographs.
A reconsideration of negative ratings for network-based recommendation
NASA Astrophysics Data System (ADS)
Hu, Liang; Ren, Liang; Lin, Wenbin
2018-01-01
Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
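As a rough illustration of the resource-spreading ("mass diffusion") idea behind network-based inference, the sketch below scores items for a user on a toy signed rating matrix; the ±1 weighting used to inject negative ratings is an assumption for illustration and does not reproduce the authors' exact algorithm.

```python
import numpy as np

# Toy signed rating matrix: rows = users, cols = items.
# +1 = liked, -1 = disliked, 0 = unrated.
R = np.array([[ 1,  1,  0, -1],
              [ 1,  0,  1,  0],
              [ 0,  1,  1,  1],
              [-1,  0,  1,  0]], dtype=float)

A = (R != 0).astype(float)             # unweighted adjacency (any rating)
k_user = A.sum(axis=1, keepdims=True)  # user degrees
k_item = A.sum(axis=0, keepdims=True)  # item degrees

def mass_diffusion_scores(user, signed=True):
    """Two-step resource spreading: items -> users -> items."""
    source = R[user] if signed else A[user]                       # initial resource
    to_users = (A / np.where(k_item == 0, 1, k_item)) @ source    # item -> user
    scores = A.T @ (to_users / np.where(k_user[:, 0] == 0, 1, k_user[:, 0]))  # user -> item
    scores[A[user] > 0] = -np.inf      # do not recommend already-rated items
    return scores

print(mass_diffusion_scores(0, signed=True))
```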
Analysis of oil consumption in cylinder of diesel engine for optimization of piston rings
NASA Astrophysics Data System (ADS)
Zhang, Junhong; Zhang, Guichang; He, Zhenpeng; Lin, Jiewei; Liu, Hai
2013-01-01
The performance and particulate emissions of a diesel engine are affected by the consumption of lubricating oil. Most studies on the in-cylinder oil consumption mechanism have used experimental methods, which are very costly, so it is necessary to study the oil consumption mechanism of the cylinder and obtain accurate results by calculation. Firstly, four main modes of lubricating oil consumption in the cylinder are analyzed and the oil consumption rate under common working conditions is calculated for the four modes for a specific engine. Then the factors that affect lubricating oil consumption, such as working conditions, the second-ring closed gap and the elastic force of the piston rings, are also investigated for the four modes. The calculation results show that most of the lubricating oil is consumed by evaporation from the liner surface. In addition, there are three other findings: (1) the oil evaporation from the liner is determined by the working condition of the engine; (2) increasing the ring closed gap reduces the oil blown through the top-ring end gap but increases blow-by; (3) as the elastic force of the ring increases, both the residual oil film thickness and the oil throw-off at the top ring decrease, so the oil scraped by the piston top edge is reduced while the friction loss between the rings and the liner increases. A neural network prediction model of in-cylinder lubricating oil consumption is established based on BP (back-propagation) neural network theory, and the model is trained and validated. The main piston-ring parameters that affect oil consumption are optimized using the BP neural network prediction model, whose prediction accuracy is within 8%, acceptable for normal engineering applications. The oil consumption is also measured experimentally; the relative errors between calculated and experimental values are less than 10%, verifying the validity of the simulation results. Applying the established simulation model and the validated BP network model generates numerical results with sufficient accuracy, which significantly reduces experimental work and provides guidance for the optimal design of the piston rings of diesel engines.
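As a minimal sketch of how a BP (back-propagation) network can map ring parameters and operating conditions to an oil consumption estimate, the example below trains a small multilayer perceptron on synthetic data; the feature set, value ranges and underlying relation are all hypothetical and not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training set: [second-ring closed gap (mm), ring elastic force (N),
# engine speed (rpm), load (%)] -> oil consumption (g/h). Values are synthetic.
X = rng.uniform([0.2, 10, 1000, 20], [0.8, 40, 4000, 100], size=(200, 4))
y = (5 + 8 * X[:, 0] - 0.05 * X[:, 1] + 0.002 * X[:, 2] + 0.03 * X[:, 3]
     + rng.normal(0, 0.5, 200))

# Back-propagation network: one hidden layer, trained by gradient descent (Adam).
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)

query = np.array([[0.5, 25, 2500, 75]])
print("predicted oil consumption (g/h):", model.predict(query)[0])
```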
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
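A minimal sketch of the idea of a simple (linear) statistical model of the range error: fit d_meas ≈ a·d_true + b on calibration data by least squares and invert it to refine raw UWB ranges before localization. The calibration values below are hypothetical, not the paper's measurement campaign.

```python
import numpy as np

# Hypothetical calibration data: true vs. UWB-estimated inter-node distances (m).
d_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0])
d_meas = np.array([1.12, 2.18, 3.21, 4.30, 5.33, 6.41, 8.55, 10.62])

# Least-squares fit of a linear error model: d_meas ≈ a * d_true + b.
A = np.vstack([d_true, np.ones_like(d_true)]).T
(a, b), *_ = np.linalg.lstsq(A, d_meas, rcond=None)

def correct(d):
    """Invert the fitted linear bias model to refine a raw range estimate."""
    return (d - b) / a

raw = 7.5
print(f"a={a:.3f}, b={b:.3f}, corrected range for {raw} m reading: {correct(raw):.2f} m")
```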
NASA Astrophysics Data System (ADS)
Nouri, N. M.; Mostafapour, K.; Kamran, M.
2018-02-01
In a closed water-tunnel circuit, multi-component strain-gauge force and moment sensors (also known as balances) are generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain-gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately and synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. The rig provides the means by which various formal experimental design techniques can be implemented, and its simplicity saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative 2D image approximation with subpixel accuracy until the simulated image matches the acquired one. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for calculating the merit function, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of the edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials in the temperature range 1000 °C to 2400 °C are presented.
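The following sketch illustrates the fitting principle in one dimension, assuming a blurred-step edge model: a simulated profile is matched to an "acquired" one by conjugate-gradient minimization of the L2 merit function. The paper's 2D model, aberration terms and Fourier-domain simulation are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

x = np.arange(64, dtype=float)

def edge_model(params, x):
    """Blurred step edge: background + contrast * smoothed step at a subpixel position."""
    pos, contrast, background, blur = params
    return background + 0.5 * contrast * (1 + erf((x - pos) / (np.sqrt(2) * blur)))

# Synthetic "acquired" profile with a true edge at 31.37 px plus noise.
rng = np.random.default_rng(1)
true_params = (31.37, 100.0, 20.0, 1.8)
acquired = edge_model(true_params, x) + rng.normal(0, 1.0, x.size)

# L2 merit function between simulated and acquired profiles, minimized by
# conjugate-gradient optimization (as in the paper, but here in 1-D).
merit = lambda p: np.sum((edge_model(p, x) - acquired) ** 2)
res = minimize(merit, x0=(30.0, 90.0, 15.0, 2.5), method="CG")
print("estimated subpixel edge position:", res.x[0])
```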
Bioinspiration: applying mechanical design to experimental biology.
Flammang, Brooke E; Porter, Marianne E
2011-07-01
The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.
NASA Astrophysics Data System (ADS)
Kuehndel, J.; Kerler, B.; Karcher, C.
2018-04-01
To improve the performance of heat exchangers for vehicle applications, it is necessary to increase the air-side heat transfer. Selective laser melting lends itself to fin development because of: i) independence from conventional tooling; ii) a fast way to conduct essential experimental studies; iii) high dimensional accuracy; iv) freedom in design. Therefore, heat exchanger elements with wavy fins were examined in an experimental study. Experiments were conducted for an air-side Reynolds number range of 1400-7400, varying the wave amplitude and wavelength of the fins at a constant water flow rate of 9.0 m3/h. Heat transfer and pressure drop characteristics were evaluated with the Nusselt number Nu and the Darcy friction factor ψ as functions of the Reynolds number, and heat transfer and pressure drop correlations were derived from the measurement data by regression analysis.
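As an illustration of deriving a heat-transfer correlation by regression, the sketch below fits a power law Nu = C·Re^m to hypothetical measurement points in log-log space; the data and the assumed correlation form are illustrative only, not the paper's results.

```python
import numpy as np

# Hypothetical measurements: air-side Reynolds number and measured Nusselt number.
Re = np.array([1400, 2000, 3000, 4200, 5500, 7400], dtype=float)
Nu = np.array([18.5, 23.1, 30.4, 37.8, 45.2, 55.9])

# Fit a power-law correlation Nu = C * Re^m by linear regression in log-log space.
m, logC = np.polyfit(np.log(Re), np.log(Nu), 1)
C = np.exp(logC)
print(f"Nu ≈ {C:.3f} * Re^{m:.3f}")

# Predicted Nusselt number at an intermediate Reynolds number.
print("Nu(5000) ≈", C * 5000 ** m)
```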
Ferrets as Models for Influenza Virus Transmission Studies and Pandemic Risk Assessments
Barclay, Wendy; Barr, Ian; Fouchier, Ron A.M.; Matsuyama, Ryota; Nishiura, Hiroshi; Peiris, Malik; Russell, Charles J.; Subbarao, Kanta; Zhu, Huachen
2018-01-01
The ferret transmission model is extensively used to assess the pandemic potential of emerging influenza viruses, yet experimental conditions and reported results vary among laboratories. Such variation can be a critical consideration when contextualizing results from independent risk-assessment studies of novel and emerging influenza viruses. To streamline interpretation of data generated in different laboratories, we provide a consensus on experimental parameters that define risk-assessment experiments of influenza virus transmissibility, including disclosure of variables known or suspected to contribute to experimental variability in this model, and advocate adoption of more standardized practices. We also discuss current limitations of the ferret transmission model and highlight continued refinements and advances to this model ongoing in laboratories. Understanding, disclosing, and standardizing the critical parameters of ferret transmission studies will improve the comparability and reproducibility of pandemic influenza risk assessment and increase the statistical power and, perhaps, accuracy of this model. PMID:29774862
A simple and sensitive method to measure timing accuracy.
De Clercq, Armand; Crombez, Geert; Buysse, Ann; Roeyers, Herbert
2003-02-01
Timing accuracy in presenting experimental stimuli (visual information on a PC or on a TV) and responding (keyboard presses and mouse signals) is of importance in several experimental paradigms. In this article, a simple system for measuring timing accuracy is described. The system uses two PCs (at least Pentium II, 200 MHz), a photocell, and an amplifier. No additional boards and timing hardware are needed. The first PC, a SlavePC, monitors the keyboard presses or mouse signals from the PC under test and uses a photocell that is placed in front of the screen to detect the appearance of visual stimuli on the display. The software consists of a small program running on the SlavePC. The SlavePC is connected through a serial line with a second PC. This MasterPC controls the SlavePC through an ActiveX control, which is used in a Visual Basic program. The accuracy of our system was investigated by using a similar setup of a SlavePC and a MasterPC to generate pulses and by using a pulse generator card. These tests revealed that our system has a 0.01-msec accuracy. As an illustration, the reaction time accuracy of INQUISIT for a few applications was tested using our system. It was found that in those applications that we investigated, INQUISIT measures reaction times from keyboard presses with millisecond accuracy.
PubChem3D: conformer ensemble accuracy
2013-01-01
Background PubChem is a free and publicly available resource containing substance descriptions and their associated biological activity information. PubChem3D is an extension to PubChem containing computationally-derived three-dimensional (3-D) structures of small molecules. All the tools and services that are a part of PubChem3D rely upon the quality of the 3-D conformer models. Construction of the conformer models currently available in PubChem3D involves a clustering stage to sample the conformational space spanned by the molecule. While this stage allows one to downsize the conformer models to more manageable size, it may result in a loss of the ability to reproduce experimentally determined “bioactive” conformations, for example, found for PDB ligands. This study examines the extent of this accuracy loss and considers its effect on the 3-D similarity analysis of molecules. Results The conformer models consisting of up to 100,000 conformers per compound were generated for 47,123 small molecules whose structures were experimentally determined, and the conformers in each conformer model were clustered to reduce the size of the conformer model to a maximum of 500 conformers per molecule. The accuracy of the conformer models before and after clustering was evaluated using five different measures: root-mean-square distance (RMSD), shape-optimized shape-Tanimoto (STST-opt) and combo-Tanimoto (ComboTST-opt), and color-optimized color-Tanimoto (CTCT-opt) and combo-Tanimoto (ComboTCT-opt). On average, the effect of clustering decreased the conformer model accuracy, increasing the conformer ensemble’s RMSD to the bioactive conformer (by 0.18 ± 0.12 Å), and decreasing the STST-opt, ComboTST-opt, CTCT-opt, and ComboTCT-opt scores (by 0.04 ± 0.03, 0.16 ± 0.09, 0.09 ± 0.05, and 0.15 ± 0.09, respectively). Conclusion This study shows the RMSD accuracy performance of the PubChem3D conformer models is operating as designed. In addition, the effect of PubChem3D sampling on 3-D similarity measures shows that there is a linear degradation of average accuracy with respect to molecular size and flexibility. Generally speaking, one can likely expect the worst-case minimum accuracy of 90% or more of the PubChem3D ensembles to be 0.75, 1.09, 0.43, and 1.13, in terms of STST-opt, ComboTST-opt, CTCT-opt, and ComboTCT-opt, respectively. This expected accuracy improves linearly as the molecule becomes smaller or less flexible. PMID:23289532
Measurement of Laser Weld Temperatures for 3D Model Input
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dagel, Daryl; Grossetete, Grant; Maccallum, Danny O.
Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.
Correcting For Seed-Particle Lag In LV Measurements
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.
1994-01-01
Two experiments conducted to evaluate effects of sizes of seed particles on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods used to evaluate errors. First experiment focused on measurement of decelerating stagnation streamline of low-speed flow around circular cylinder with two-dimensional afterbody. Second performed in transonic flow and involved measurement of decelerating stagnation streamline of hemisphere with cylindrical afterbody. Concluded, mean-quantity LV measurements subject to large errors directly attributable to sizes of particles. Predictions of particle-response theory showed good agreement with experimental results, indicating velocity-error-correction technique used in study viable for increasing accuracy of laser velocimetry measurements. Technique simple and useful in any research facility in which flow velocities measured.
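A minimal sketch of the particle-response idea, assuming a first-order (Stokes) lag model du_p/dt = (u_f − u_p)/τ: the particle velocity is integrated along a prescribed decelerating streamline, and the fluid velocity is then recovered with the correction u_f ≈ u_p + τ·du_p/dt. All parameter values are hypothetical and not those of the NASA experiments.

```python
import numpy as np

# Stokes response time of a seed particle (assumed spherical, dilute).
rho_p = 1000.0      # particle density, kg/m^3 (hypothetical droplet)
d_p   = 5.0e-6      # particle diameter, m
mu    = 1.8e-5      # air dynamic viscosity, Pa*s
tau   = rho_p * d_p**2 / (18 * mu)

# Prescribed decelerating fluid velocity along a stagnation streamline (toy model).
t  = np.linspace(0, 5e-3, 2000)
uf = 50.0 * np.exp(-t / 1e-3)

# Integrate the first-order particle response du_p/dt = (u_f - u_p) / tau.
up = np.empty_like(uf)
up[0] = uf[0]
dt = t[1] - t[0]
for i in range(1, t.size):
    up[i] = up[i-1] + dt * (uf[i-1] - up[i-1]) / tau

# Velocity-error correction: recover the fluid velocity from the measured particle
# velocity and its time derivative, u_f ≈ u_p + tau * du_p/dt.
uf_est = up + tau * np.gradient(up, dt)
print("max lag error:", np.max(np.abs(uf - up)),
      "after correction:", np.max(np.abs(uf - uf_est)))
```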
Recent ARPES experiments on quasi-1D bulk materials and artificial structures.
Grioni, M; Pons, S; Frantzeskakis, E
2009-01-14
The spectroscopy of quasi-one-dimensional (1D) systems has been a subject of strong interest since the first experimental observations of unusual line shapes in the early 1990s. Angle-resolved photoemission (ARPES) measurements performed with increasing accuracy have greatly broadened our knowledge of the properties of bulk 1D materials and, more recently, of artificial 1D structures. They have yielded a direct view of 1D bands, of open Fermi surfaces, and of characteristic instabilities. They have also provided unique microscopic evidence for the non-conventional, non-Fermi-liquid, behavior predicted by theory, and for strong and singular interactions. Here we briefly review some of the remarkable experimental results obtained in the last decade.
Experimental Study of under-platform Damper Kinematics in Presence of Blade Dynamics
NASA Astrophysics Data System (ADS)
Botto, D.; Gastaldi, C.; Gola, M. M.; Umer, M.
2018-01-01
Among the various devices used in the aerospace industry, under-platform dampers are widely used in turbo engines to mitigate blade vibration. Nevertheless, damper behaviour is not easy to simulate, and engineers have been working to improve the accuracy with which theoretical contact models predict it. The majority of experimental setups collect data only in terms of blade amplitude reduction, which does not increase knowledge of the damper dynamics, so uncertainty about the damper behaviour remains a major issue. In this paper, a novel test rig has been purposely designed to accommodate a single blade and two under-platform dampers in order to investigate the damper-blade interactions in depth. In this test bench, a contact force measuring system was designed to measure the damper contact forces extensively. The damper kinematics is reconstructed from the relative displacement measured between damper and blade. This paper describes the concept behind the new approach, presents the details of the new test rig and discusses the experimental results by comparing them with results previously measured on an older experimental setup.
Fan, Shu-Han; Chou, Chia-Ching; Chen, Wei-Chen; Fang, Wai-Chi
2015-01-01
In this study, an effective real-time obstructive sleep apnea (OSA) detection method based on frequency analysis of the ECG-derived respiratory (EDR) signal and heart rate variability (HRV) is proposed. Compared to traditional polysomnography (PSG), which requires several physiological signals to be measured from patients, the proposed OSA detection method uses only ECG signals to determine the time intervals of OSA. To make the method feasible for hardware implementation and thereby achieve real-time detection and portable application, a simplified Lomb periodogram is used to perform the frequency analysis of EDR and HRV. The experimental results indicate that the overall performance can be effectively increased, with a specificity (Sp) of 91%, a sensitivity (Se) of 95.7%, and an accuracy of 93.2% when the EDR and HRV indexes are integrated.
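A minimal sketch of the HRV branch of such a detector, assuming RR intervals extracted from a single-lead ECG: the unevenly sampled series is analyzed with a Lomb periodogram (here SciPy's implementation rather than the paper's simplified hardware version), and a band-power fraction is used as one possible apnea feature. The data and band limits are hypothetical.

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical R-peak times (s) and derived RR intervals -> an unevenly sampled
# heart-rate-variability series, as would be obtained from a single-lead ECG.
rng = np.random.default_rng(2)
t_r = np.cumsum(0.8 + 0.05 * np.sin(2 * np.pi * 0.02 * np.arange(300))
                + 0.02 * rng.standard_normal(300))
rr = np.diff(t_r)                 # RR intervals (s)
t = t_r[1:]                       # timestamps of each RR interval
rr = rr - rr.mean()               # remove the mean before spectral analysis

# Lomb periodogram: spectral analysis without resampling the uneven series.
freqs = np.linspace(0.005, 0.5, 500)              # Hz
pgram = lombscargle(t, rr, 2 * np.pi * freqs)     # expects angular frequencies

# Very-low-frequency power (~0.01-0.05 Hz) is often elevated during apnea episodes;
# a simple band-power fraction could serve as one detection feature.
vlf = pgram[(freqs >= 0.01) & (freqs < 0.05)].sum()
print("VLF power fraction:", vlf / pgram.sum())
```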
Towards Cooperative Predictive Data Mining in Competitive Environments
NASA Astrophysics Data System (ADS)
Lisý, Viliam; Jakob, Michal; Benda, Petr; Urban, Štěpán; Pěchouček, Michal
We study the problem of predictive data mining in a competitive multi-agent setting, in which each agent is assumed to have some partial knowledge required for correctly classifying a set of unlabelled examples. The agents are self-interested and therefore need to reason about the trade-offs between increasing their classification accuracy by collaborating with other agents and disclosing their private classification knowledge to other agents through such collaboration. We analyze the problem and propose a set of components which can enable cooperation in this otherwise competitive task. These components include measures for quantifying private knowledge disclosure, data-mining models suitable for multi-agent predictive data mining, and a set of strategies by which agents can improve their classification accuracy through collaboration. The overall framework and its individual components are validated on a synthetic experimental domain.
Discriminative correlation filter tracking with occlusion detection
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Chen, Zhong; Yu, XiPeng; Zhang, Ting; He, Jing
2018-03-01
To address the problem that correlation filter-based tracking algorithms cannot track a target under severe occlusion, a target re-detection mechanism is proposed. First, building on ECO, we propose a multi-peak detection model and use the response value to distinguish occlusion from deformation during tracking, which improves the tracking success rate. We then add a confidence model to the update mechanism to effectively prevent model drift caused by similar targets or background during tracking. Finally, a re-detection mechanism is added so that the target is relocated after it is lost, which increases the accuracy of target positioning. The experimental results demonstrate that the proposed tracker performs favorably against state-of-the-art methods in terms of robustness and accuracy.
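A generic sketch of using the correlation response map to detect occlusion, assuming a peak-to-sidelobe ratio and a secondary-peak check as confidence measures; the thresholds and the exact multi-peak/confidence models of the paper are not reproduced.

```python
import numpy as np

def response_confidence(resp, exclude=5):
    """Peak value, peak-to-sidelobe ratio, and secondary-peak ratio of a response map."""
    peak_idx = np.unravel_index(np.argmax(resp), resp.shape)
    peak = resp[peak_idx]
    mask = np.ones_like(resp, dtype=bool)
    r0, c0 = peak_idx
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False   # exclude main-peak region
    sidelobe = resp[mask]
    psr = (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
    return peak, psr, sidelobe.max() / (peak + 1e-12)

def should_update_and_track(resp, psr_thresh=8.0, second_ratio_thresh=0.6):
    """Heuristic occlusion test: a low PSR or a competing peak suggests occlusion,
    so the model update is skipped and re-detection can be triggered."""
    _, psr, second_ratio = response_confidence(resp)
    occluded = psr < psr_thresh or second_ratio > second_ratio_thresh
    return not occluded

resp = np.random.default_rng(3).random((61, 61)) * 0.2
resp[30, 30] = 1.0                                  # clear single peak -> confident
print(should_update_and_track(resp))
```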
SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments
Youngblut, Nicholas D.; Barnett, Samuel E.; Buckley, Daniel H.
2018-01-01
DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments. PMID:29643843
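A small sketch of the evaluation metric as defined in the abstract (balanced accuracy as the product of specificity and sensitivity), computed for a hypothetical set of taxa that are truly labeled versus called labeled by a DNA-SIP analysis:

```python
import numpy as np

def sip_balanced_accuracy(truth, called):
    """Specificity, sensitivity, and their product (the paper's 'balanced accuracy')
    for a set of taxa: truth = actually isotopically labeled, called = flagged by
    the DNA-SIP analysis method."""
    truth, called = np.asarray(truth, bool), np.asarray(called, bool)
    tp = np.sum(truth & called)
    tn = np.sum(~truth & ~called)
    fp = np.sum(~truth & called)
    fn = np.sum(truth & ~called)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return specificity, sensitivity, specificity * sensitivity

# Toy example: 10 taxa, 4 truly labeled.
truth  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
called = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]
print(sip_balanced_accuracy(truth, called))
```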
Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter
2016-01-01
Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
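A rough sketch of the difference between GBLUP and a GFBLUP-like model on synthetic genotypes: both use all markers, but the second builds its relationship matrix from differentially weighted feature and non-feature marker sets. The fixed 0.8/0.2 weights and the kernel-ridge stand-in for the mixed-model equations are simplifying assumptions; real GFBLUP estimates the weights as variance components (e.g., by REML).

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 1000                                      # individuals, markers
Z = rng.integers(0, 3, size=(n, m)).astype(float)     # genotypes coded 0/1/2
Z -= Z.mean(axis=0)                                   # center marker columns

feature = np.zeros(m, dtype=bool)
feature[:100] = True                  # hypothetical genomic feature (e.g., one GO term)
beta = np.zeros(m)
beta[:20] = rng.normal(0, 0.5, 20)    # causal variants enriched in the feature
y = Z @ beta + rng.normal(0, 1.0, n)

def grm(Zsub):
    """Genomic relationship matrix from a marker subset."""
    return Zsub @ Zsub.T / Zsub.shape[1]

def kernel_ridge_fit_predict(G, y, train, test, lam=1.0):
    """Rough BLUP stand-in: kernel ridge with the GRM as kernel."""
    alpha = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(train.size), y[train])
    return G[np.ix_(test, train)] @ alpha

train, test = np.arange(150), np.arange(150, n)
G_all = grm(Z)                                                  # GBLUP: equal weights
G_f   = 0.8 * grm(Z[:, feature]) + 0.2 * grm(Z[:, ~feature])    # GFBLUP-like reweighting

for name, G in [("GBLUP", G_all), ("GFBLUP-like", G_f)]:
    pred = kernel_ridge_fit_predict(G, y, train, test)
    print(name, "prediction r =", np.corrcoef(pred, y[test])[0, 1])
```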
Chitinase enzyme activity in CSF is a powerful biomarker of Alzheimer disease.
Watabe-Rudolph, M; Song, Z; Lausser, L; Schnack, C; Begus-Nahrmann, Y; Scheithauer, M-O; Rettinger, G; Otto, M; Tumani, H; Thal, D R; Attems, J; Jellinger, K A; Kestler, H A; von Arnim, C A F; Rudolph, K L
2012-02-21
DNA damage accumulation in brain is associated with the development of Alzheimer disease (AD), but newly identified protein markers of DNA damage have not been evaluated in the diagnosis of AD and other forms of dementia. Here, we analyzed the level of novel biomarkers of DNA damage and telomere dysfunction (chitinase activity, N-acetyl-glucosaminidase activity, stathmin, and EF-1α) in CSF of 94 patients with AD, 41 patients with non-AD dementia, and 40 control patients without dementia. Enzymatic activity of chitinase (chitotriosidase activity) and stathmin protein level were significantly increased in CSF of patients with AD and non-AD dementia compared with that of no dementia control patients. As a single marker, chitinase activity was most powerful for distinguishing patients with AD from no dementia patients with an accuracy of 85.8% using a single threshold. Discrimination was even superior to clinically standard CSF markers that showed an accuracy of 78.4% (β-amyloid) and 77.6% (tau). Combined analysis of chitinase with other markers increased the accuracy to a maximum of 91%. The biomarkers of DNA damage were also increased in CSF of patients with non-AD dementia compared with no dementia patients, and the new biomarkers improved the diagnosis of non-AD dementia as well as the discrimination of AD from non-AD dementia. Taken together, the findings in this study provide experimental evidence that DNA damage markers are significantly increased in AD and non-AD dementia. The biomarkers identified outperformed the standard CSF markers for diagnosing AD and non-AD dementia in the cohort investigated.
NASA Technical Reports Server (NTRS)
Carpenter, Paul
2003-01-01
Electron-probe microanalysis standards and issues related to measurement and accuracy of microanalysis will be discussed. Critical evaluation of standards based on homogeneity and comparison with wet-chemical analysis will be made. Measurement problems such as spectrometer dead-time will be discussed. Analytical accuracy issues will be evaluated for systems by alpha-factor analysis and comparison with experimental k-ratio databases.
Experimental studies of high-accuracy RFID localization with channel impairments
NASA Astrophysics Data System (ADS)
Pauls, Eric; Zhang, Yimin D.
2015-05-01
Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
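A minimal sketch of RSSI-based reader localization, assuming a pre-calibrated log-distance path-loss model to convert RSSI readings into distances and a linearized least-squares multilateration against reference tags; the tag layout, RSSI values and model parameters are hypothetical.

```python
import numpy as np

# Reference tag positions (m) and RSSI readings (dBm) observed by the reader.
tags = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
rssi = np.array([-50.6, -54.5, -56.1, -53.8])

# Log-distance path-loss model (P0, n assumed pre-calibrated).
P0, n = -45.0, 2.2                      # dBm at 1 m, path-loss exponent
d = 10 ** ((P0 - rssi) / (10 * n))      # estimated reader-tag distances (m)

# Linearized multilateration: subtract the first equation to remove quadratic
# terms, then solve the resulting linear system in the least-squares sense.
A = 2 * (tags[1:] - tags[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(tags[1:] ** 2, axis=1) - np.sum(tags[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated reader position (m):", pos)
```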
Cascetta, Furio; Palombo, Adolfo; Scalabrini, Gianfranco
2003-04-01
In this paper the metrological behavior of two different insertion flowmeters (magnetic and turbine types) in large water pipes is described. A master-slave calibration was carried out in order to estimate the overall uncertainty of the tested meters. The experimental results show that (i) the tested magnetic insertion flowmeter meets its claimed accuracy (+/- 2%) over the whole flow range (20:1), whereas (ii) the tested insertion turbine meter reaches its claimed accuracy only in the upper zone of the flow range.
Hollyday, E.F.; Hansen, G.R.
1983-01-01
Streamflow may be estimated with regression equations that relate streamflow characteristics to characteristics of the drainage basin. A statistical experiment was performed to compare the accuracy of equations using basin characteristics derived from maps and climatological records (control group equations) with the accuracy of equations using basin characteristics derived from Landsat data as well as maps and climatological records (experimental group equations). Results show that when the equations in both groups are arranged into six flow categories, there is no substantial difference in accuracy between control group equations and experimental group equations for this particular site where drainage area accounts for more than 90 percent of the variance in all streamflow characteristics (except low flows and most annual peak logarithms). (USGS)
Culture and Probability Judgment Accuracy: The Influence of Holistic Reasoning
Lechuga, Julia; Wiebe, John S.
2012-01-01
A well-established phenomenon in the judgment and decision-making tradition is the overconfidence one places in the amount of knowledge that one possesses. Overconfidence or probability judgment accuracy varies not only individually but also across cultures. However, research efforts to explain cross-cultural variations in the overconfidence phenomenon have seldom been made. In Study 1, the authors compared the probability judgment accuracy of U.S. Americans (N = 108) and Mexican participants (N = 100). In Study 2, they experimentally primed culture by randomly assigning English/Spanish bilingual Mexican Americans (N = 195) to response language. Results of both studies replicated the cross-cultural variation of probability judgment accuracy previously observed in other cultural groups. U.S. Americans displayed less overconfidence when compared to Mexicans. These results were then replicated in bilingual participants, when culture was experimentally manipulated with language priming. Holistic reasoning did not account for the cross-cultural variation of overconfidence. Suggestions for future studies are discussed. PMID:22879682
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal-mount axes on measurement accuracy. By simplifying the optical system model of a spherical-mirror-based laser tracking system, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal-mount axes from the positions of the spherical mirror, the biconvex lens, the cat's-eye reflector and the measuring beam. The motions of the polarization beam splitter and the biconvex lens along the optical axis and perpendicular to it are driven by the error motions of the gimbal-mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal-mount axes can then be recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplifies the experimental procedure, and the spherical mirror reduces the influence of rotation errors of the gimbal-mount axes on the measurement accuracy of the laser tracking system.
Feng, L X; Yao, J Y; Chen, L; Tang, Y; Hou, F
2016-08-01
To discuss the application of the disparity discriminating accuracy test in evaluating stereopsis after surgery for intermittent exotropia. Patients with intermittent exotropia who underwent surgery from July 2011 to June 2013 were followed up. Stereoacuity was examined with the Titmus Stereotest, the Randot Stereotest and the Frisby Stereotest. Twenty adult cases whose stereoacuity had returned to normal were chosen as the experimental group, and twenty healthy adults were selected as the normal control group. Both groups were examined with the disparity discriminating accuracy test. Discriminating accuracy in the two groups was analyzed with two-way ANOVA, and test-retest reliability was analyzed with the intraclass correlation coefficient. The test-retest reliability of the disparity discriminating accuracy test was excellent (ICC=0.99, P<0.01). Discriminating accuracy under different disparities in the experimental group was 0.56±0.09, 0.67±0.14, 0.77±0.15, 0.82±0.14, 0.85±0.11, 0.85±0.14, 0.87±0.10 and 0.84±0.16, while that in the control group was 0.77±0.09, 0.88±0.09, 0.93±0.08, 0.91±0.09, 0.95±0.08, 0.96±0.05, 0.97±0.06 and 0.96±0.04; the differences were statistically significant (F=38.06, P<0.01). The discriminating ability in both groups was affected by the size of the disparity. At small disparities, a large difference was found between the experimental group (0.67±0.12) and the control group (0.86±0.07) (F=4.84, P<0.05). Stereoscopic function can be evaluated comprehensively with the disparity discriminating accuracy test. Using this test, a certain degree of stereopsis dysfunction can still be found in postoperative intermittent exotropia patients whose stereoacuity is normal on traditional stereotests. (Chin J Ophthalmol, 2016, 52: 584-588).
Kunstman, Jonathan W.; Clerkin, Elise M.; Palmer, Kateyln; Peters, M. Taylar; Dodd, Dorian R.; Smith, April R.
2015-01-01
Background and Objectives This study tested whether relatively low levels of interoceptive accuracy (IAcc) are associated with body dysmorphic disorder (BDD) symptoms. Additionally, given research indicating that power attunes individuals to their internal states, we sought to determine if state interoceptive accuracy could be improved through an experimental manipulation of power. Method Undergraduate women (N = 101) completed a baseline measure of interoceptive accuracy and then were randomized to a power or control condition. Participants were primed with power or a neutral control topic and then completed a post-manipulation measure of state IAcc. Trait BDD symptoms were assessed with a self-report measure. Results Controlling for baseline IAcc, within the control condition, there was a significant inverse relationship between trait BDD symptoms and interoceptive accuracy. Continuing to control for baseline IAcc, within the power condition, there was not a significant relationship between trait BDD symptoms and IAcc, suggesting that power may have attenuated this relationship. At high levels of BDD symptomology, there was also a significant simple effect of experimental condition, such that participants in the power (vs. control) condition had better interoceptive accuracy. These results provide initial evidence that power may positively impact interoceptive accuracy among those with high levels of BDD symptoms. Limitations This cross-sectional study utilized a demographically homogenous sample of women that reflected a broad range of symptoms; thus, although there were a number of participants reporting elevated BDD symptoms, these findings might not generalize to other populations or clinical samples. Conclusions . This study provides the first direct test of the relationship between trait BDD symptoms and IAcc, and provides preliminary evidence that among those with severe BDD symptoms, power may help connect individuals with their internal states. Future research testing the mechanisms linking BDD symptoms with IAcc, as well as how individuals can better connect with their internal experiences is needed. PMID:26295932
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) conditions, at every step of incremental support vector machine (SVM) learning a newly added sample that violates the KKT conditions becomes a new support vector (SV) and can migrate old samples between the SV set and the non-support-vector (NSV) set, and the learning model should then be updated based on the SVs. However, it is not clear in advance which of the old samples will change between the SV and NSV sets. Moreover, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how to choose new SVs from the old sets during the incremental stages, and when to perform incremental steps, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and, at the same time, uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
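A rough sketch of the triggering idea using scikit-learn, under the assumption that retraining on the current support vectors plus the new sample approximates a true incremental update (scikit-learn has no exact incremental SVM): a streamed sample is only allowed to trigger an update when it is wrongly predicted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_init, y_init, X_stream, y_stream = X[:100], y[:100], X[100:], y[100:]

clf = SVC(kernel="rbf", C=1.0).fit(X_init, y_init)
# Candidate set: current support vectors (plus any sample that triggers an update).
cand_X, cand_y = clf.support_vectors_, y_init[clf.support_]

for xi, yi in zip(X_stream, y_stream):
    if clf.predict(xi.reshape(1, -1))[0] == yi:
        continue                        # correctly predicted: skip the update step
    # Wrongly predicted sample triggers the incremental step: add it to the
    # candidate set and refit on support vectors + the new sample only.
    cand_X = np.vstack([cand_X, xi])
    cand_y = np.append(cand_y, yi)
    clf = SVC(kernel="rbf", C=1.0).fit(cand_X, cand_y)
    cand_X, cand_y = clf.support_vectors_, cand_y[clf.support_]

print("final number of support vectors:", clf.support_vectors_.shape[0])
```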
Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm
NASA Astrophysics Data System (ADS)
Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad
2016-04-01
Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray data-sets called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation for all data-sets in terms of the accuracy and achieves a lower number of genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes and competitive results in terms of the classification accuracy.
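A minimal sketch of the filter stage, assuming discretized expression values: genes are ranked by symmetrical uncertainty SU(X,Y) = 2·I(X;Y)/(H(X)+H(Y)) with the class label, and a wrapper (such as harmony search) would then search subsets of the top-ranked candidates. The toy data are synthetic.

```python
import numpy as np

def entropy_from_counts(counts):
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)) for discrete variables."""
    hx = entropy_from_counts(np.unique(x, return_counts=True)[1])
    hy = entropy_from_counts(np.unique(y, return_counts=True)[1])
    hxy = entropy_from_counts(np.unique(np.stack([x, y], axis=1), axis=0,
                                        return_counts=True)[1])
    mi = hx + hy - hxy
    return 2 * mi / (hx + hy) if hx + hy > 0 else 0.0

# Toy microarray: 100 samples x 50 genes, discretized into 3 expression levels.
rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 100)                     # tumour / normal
genes = rng.integers(0, 3, size=(100, 50))
genes[:, 0] = labels + rng.integers(0, 2, 100)       # gene 0 is informative

# Filter stage: rank genes by SU with the class label and keep the top k.
su = np.array([symmetrical_uncertainty(genes[:, j], labels) for j in range(50)])
top_k = np.argsort(su)[::-1][:10]
print("top-ranked genes by SU:", top_k)
```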
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-01-01
In order to improve the accuracy of the ultrasonic phased-array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm in which interpolation and multichannel decomposition are processed simultaneously. We also derived the general formula for a parallel CIC interpolation filter of arbitrary factor and built an ultrasonic phased-array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, the number of additions was reduced by 12.5% and the number of multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we added a compensation filter; the compensated CIC filter has a flatter pass band, a steeper transition band, and greater stop-band attenuation. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition the time-delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications that require high time-delay accuracy and fast detection. PMID:29023385
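A basic software sketch of CIC interpolation (comb stages at the low rate, zero-stuffing, integrator stages at the high rate) to illustrate the filter the paper parallelizes and compensates; the parallel decomposition, compensation filter and FPGA implementation are not modelled here.

```python
import numpy as np

def cic_interpolate(x, R=8, N=3, M=1):
    """Basic CIC interpolator: N comb stages at the low rate, zero-stuffing by R,
    then N integrator stages at the high rate (differential delay M)."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                       # comb (differentiator) stages
        y = y - np.concatenate([np.zeros(M), y[:-M]])
    up = np.zeros(len(y) * R)                # zero-stuffing upsampler
    up[::R] = y
    for _ in range(N):                       # integrator stages
        up = np.cumsum(up)
    return up * R / (R * M) ** N             # normalise the DC gain (R*M)^N / R

# A slow sinusoid interpolated 8x: the output is a smooth version of the input
# with 8 samples per original sample (plus the usual CIC pass-band droop).
t = np.arange(64)
x = np.sin(2 * np.pi * t / 32)
y = cic_interpolate(x, R=8, N=3)
print(len(x), "->", len(y))
```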
Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P
2017-10-01
Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still places a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
Jia, Cang-Zhi; He, Wen-Ying; Yao, Yu-Hua
2017-03-01
Hydroxylation of proline or lysine residues in proteins is a common post-translational modification event, and such modifications are found in many physiological and pathological processes. Nonetheless, the exact molecular mechanism of hydroxylation remains under investigation. Because experimental identification of hydroxylation is time-consuming and expensive, bioinformatics tools with high accuracy represent desirable alternatives for large-scale rapid identification of protein hydroxylation sites. In view of this, we developed a support vector machine-based tool, OH-PRED, for the prediction of protein hydroxylation sites using adapted normal distribution bi-profile Bayes feature extraction in combination with the physicochemical property indexes of the amino acids. In a jackknife cross validation, OH-PRED yields an accuracy of 91.88% and a Matthews correlation coefficient (MCC) of 0.838 for the prediction of hydroxyproline sites, and an accuracy of 97.42% and an MCC of 0.949 for the prediction of hydroxylysine sites. These results demonstrate that OH-PRED significantly increases the prediction accuracy for hydroxyproline and hydroxylysine sites, by 7.37 and 14.09% respectively, when compared with the latest predictor PredHydroxy. In independent tests, OH-PRED also outperforms previously published methods.
Accuracy of binding mode prediction with a cascadic stochastic tunneling method.
Fischer, Bernhard; Basili, Serena; Merlitz, Holger; Wenzel, Wolfgang
2007-07-01
We investigate the accuracy of the binding modes predicted for 83 complexes of the high-resolution subset of the ASTEX/CCDC receptor-ligand database using the atomistic FlexScreen approach with a simple forcefield-based scoring function. The median RMS deviation between experimental and predicted binding mode was just 0.83 Å. Over 80% of the ligands dock within 2 Å of the experimental binding mode; for 60 complexes the docking protocol locates the correct binding mode in all of ten independent simulations. Most docking failures arise because (a) the experimental structure clashes in our forcefield and is thus unattainable in the docking process or (b) the ligand is stabilized by crystal water.
All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.
Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne
2015-04-28
Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.
Improving IMES Localization Accuracy by Integrating Dead Reckoning Information
Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki
2016-01-01
Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
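As a rough illustration of the trilateration step described above (a sketch, not the authors' implementation), a position can be recovered from range estimates to known transmitter locations by linearizing the range equations and solving in the least-squares sense. The transmitter coordinates and noise level below are invented for the example:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from ranges to known anchors.

    Subtracting the first range equation from the others linearizes the
    problem, which is then solved with numpy's lstsq.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x1, y1 = anchors[0]
    r1 = ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r1**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x1**2 + y1**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical transmitter positions (m) and a true receiver position.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
true_pos = np.array([3.2, 5.7])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1) \
         + 0.1 * rng.standard_normal(4)   # made-up range noise

print("estimate:", trilaterate(anchors, ranges), "true:", true_pos)
```

In a hybrid scheme such as the one described, an extended Kalman filter would then fuse these range-based fixes with the dead reckoning prediction.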
Increased dimensionality of cell-cell communication can decrease the precision of gradient sensing
NASA Astrophysics Data System (ADS)
Smith, Tyler; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
Gradient sensing is a biological computation that involves comparison of concentrations measured in at least two different locations. As such, the precision of gradient sensing is limited by the intrinsic stochasticity in the communication that brings such distributed information to the same location. We have recently analyzed such limitations experimentally and theoretically in multicellular gradient sensing in mammary epithelial cell organoids. For 1d chains of collectively sensing cells, the communication noise puts a severe constraint on how the accuracy of gradient sensing increases with the number of cells in the sensor. A question remains as to whether the effect of the noise can be mitigated by the extra spatial averaging allowed in sensing by 2d and 3d cellular organoids. Here we show using computer simulations that, counterintuitively, such spatial averaging decreases gradient sensitivity (while it increases concentration sensitivity). We explain the findings analytically and propose that a recently introduced Regional Excitation - Global Inhibition model of gradient sensing can overcome this limitation and use 2d or 3d spatial averaging to improve the sensing accuracy. Supported by NSF Grant PHY/1410978 and James S. McDonnell Foundation Grant # 220020321.
Zimbelman, Eloise G; Keefe, Robert F
2018-01-01
Real-time positioning on mobile devices using global navigation satellite system (GNSS) technology paired with radio frequency (RF) transmission (GNSS-RF) may help to improve safety on logging operations by increasing situational awareness. However, GNSS positional accuracy for ground workers in motion may be reduced by multipath error, satellite signal obstruction, or other factors. Radio propagation of GNSS locations may also be impacted due to line-of-sight (LOS) obstruction in remote, forested areas. The objective of this study was to characterize the effects of forest stand characteristics, topography, and other LOS obstructions on the GNSS accuracy and radio signal propagation quality of multiple Raveon Atlas PT GNSS-RF transponders functioning as a network in a range of forest conditions. Because most previous research with GNSS in forestry has focused on stationary units, we chose to analyze units in motion by evaluating the time-to-signal accuracy of geofence crossings in 21 randomly-selected stands on the University of Idaho Experimental Forest. Specifically, we studied the effects of forest stand characteristics, topography, and LOS obstructions on (1) the odds of missed GNSS-RF signals, (2) the root mean squared error (RMSE) of Atlas PTs, and (3) the time-to-signal accuracy of safety geofence crossings in forested environments. Mixed-effects models used to analyze the data showed that stand characteristics, topography, and obstructions in the LOS affected the odds of missed radio signals while stand variables alone affected RMSE. Both stand characteristics and topography affected the accuracy of geofence alerts.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
Identification of differences between finite element analysis and experimental vibration data
NASA Technical Reports Server (NTRS)
Lawrence, C.
1986-01-01
An important problem that has emerged from combined analytical/experimental investigations is the task of identifying and quantifying the differences between results predicted by F.E. analysis and results obtained from experiment. The objective of this study is to extend and evaluate the procedure developed by Sidhu for correlation of linear F.E. and modal test data to include structures with viscous damping. The advantage of this procedure is that the differences are identified in terms of physical mass, damping, and stiffness parameters instead of in terms of frequencies and mode shapes. Since the differences are computed in terms of physical parameters, locations of modeling problems can be directly identified in the F.E. model. From simulated data it was determined that the accuracy of the computed differences increases as the number of experimentally measured modes included in the calculations is increased. When the number of experimental modes is at least equal to the number of translational degrees of freedom in the F.E. model, both the location and magnitude of the differences can be computed very accurately. When the number of modes is less than this amount, the location of the differences may be determined even though their magnitudes will be underestimated.
Experimental validation of a direct simulation by Monte Carlo molecular gas flow model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shufflebotham, P.K.; Bartel, T.J.; Berney, B.
1995-07-01
The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered. The axisymmetric model showed localized low pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data.
Modeling of high-strength concrete-filled FRP tube columns under cyclic load
NASA Astrophysics Data System (ADS)
Ong, Kee-Yen; Ma, Chau-Khun; Apandi, Nazirah Mohd; Awang, Abdullah Zawawi; Omar, Wahid
2018-05-01
The behavior of high-strength concrete (HSC)-filled fiber-reinforced-polymer (FRP) tube (HSCFFT) columns subjected to cyclic lateral loading is presented in this paper. As the experimental study is costly and time consuming, a finite element analysis (FEA) is chosen for the study. Most previous studies have focused on examining the axial load behavior of HSCFFT columns instead of their seismic behavior. The seismic behavior of HSCFFT columns has been the main interest in the industry. The key objective of this research is to develop a reliable numerical non-linear FEA model to represent the seismic behavior of such columns. An FEA model was developed using the Concrete Damaged Plasticity Model (CDPM) available in the finite element software package ABAQUS. Comparisons between experimental results from previous research and the predicted results were made based on load versus displacement relationships and the ultimate strength of the column. The results showed that the column increased in ductility and was able to deform to a greater extent with an increase in the FRP confinement ratio. With the increase of confinement ratio, the HSCFFT column achieved a higher moment resistance, thus indicating a higher failure strength in the column under cyclic lateral load. It was found that the proposed FEA model can reproduce the experimental results with adequate accuracy.
Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia
2016-04-01
Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
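The convergence behaviour described here can be illustrated with a toy calculation (not the authors' CHO pipeline): given hypothetical decision-variable scores for signal-present and signal-absent scans, Az estimated from random subsets of increasing size is compared against the full-set reference. The score distributions below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 100
# Hypothetical CHO decision variables; the 0.9 separation is made up.
signal_absent = rng.standard_normal(n_scans)
signal_present = rng.standard_normal(n_scans) + 0.9

def az(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

reference = az(signal_present, signal_absent)
for n in (10, 25, 50, 100):
    idx = rng.choice(n_scans, size=n, replace=False)
    est = az(signal_present[idx], signal_absent[idx])
    print(f"scans={n:3d}  Az={est:.3f}  |error vs reference|={abs(est - reference):.3f}")
```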
Utilizing Metalized Fabrics for Liquid and Rip Detection and Localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Stephen; Mahan, Cody; Kuhn, Michael J
2013-01-01
This paper proposes a novel technique for utilizing conductive textiles as a distributed sensor for detecting and localizing liquids (e.g., blood), rips (e.g., bullet holes), and potentially biosignals. The proposed technique is verified through both simulation and experimental measurements. Circuit theory is utilized to depict conductive fabric as a bounded, near-infinite grid of resistors. Solutions to the well-known infinite resistance grid problem are used to confirm the accuracy and validity of this modeling approach. Simulations allow for discontinuities to be placed within the resistor matrix to illustrate the effects of bullet holes within the fabric. A real-time experimental system was developed that uses a multiplexed Wheatstone bridge approach to reconstruct the resistor grid across the conductive fabric and detect liquids and rips. The resistor grid model is validated through a comparison of simulated and experimental results. Results suggest accuracy proportional to the electrode spacing in determining the presence and location of discontinuities in conductive fabric samples. Future work is focused on refining the experimental system to provide more accuracy in detecting and localizing events as well as developing a complete prototype that can be deployed for field testing. Potential applications include intelligent clothing, flexible, lightweight sensing systems, and combat wound detection.
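The "bounded grid of resistors" picture lends itself to a small numerical sketch, assuming unit resistances and arbitrary electrode positions (this is a simplification, not the paper's multiplexed Wheatstone-bridge system): model the fabric as a grid graph, compute the two-point resistance from the pseudo-inverse of the graph Laplacian, and see how removing the resistors around one node, mimicking a hole, changes the measurement:

```python
import numpy as np

def grid_edges(rows, cols):
    """Horizontal and vertical unit-resistor edges of a rows x cols grid."""
    idx = lambda r, c: r * cols + c
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:
                edges.append((idx(r, c), idx(r + 1, c)))
    return edges

def effective_resistance(n_nodes, edges, a, b):
    """Two-point resistance via the pseudo-inverse of the graph Laplacian."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    Lp = np.linalg.pinv(L)
    return Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]

rows, cols = 8, 8
edges = grid_edges(rows, cols)
a, b = 0, rows * cols - 1                # hypothetical electrode nodes
baseline = effective_resistance(rows * cols, edges, a, b)

# Remove every resistor around node 27 to mimic a hole in the fabric.
damaged = [e for e in edges if 27 not in e]
after = effective_resistance(rows * cols, damaged, a, b)
print(f"R(intact)={baseline:.3f}  R(damaged)={after:.3f}  increase={after - baseline:.3f}")
```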
d'plus: A program to calculate accuracy and bias measures from detection and discrimination data.
Macmillan, N A; Creelman, C D
1997-01-01
The program d'plus calculates accuracy (sensitivity) and response-bias parameters using Signal Detection Theory, Choice Theory, and 'nonparametric' models. It is appropriate for data from one-interval, two- and three-interval forced-choice, same-different, ABX, and oddity experimental paradigms.
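For the one-interval (yes/no) case, the Signal Detection Theory measures that such a program reports reduce to simple functions of the hit and false-alarm rates. A minimal re-implementation sketch of the standard formulas (not the published program itself, and using one conventional correction for extreme rates) is:

```python
from scipy.stats import norm

def dprime_yes_no(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c for a one-interval (yes/no) task.

    Applies the common 1/(2N) correction when a rate would be 0 or 1;
    this is one conventional choice among several.
    """
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    h = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    f = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    d = norm.ppf(h) - norm.ppf(f)          # sensitivity
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))  # response bias (criterion)
    return d, c

# Hypothetical counts from a yes/no detection experiment.
print(dprime_yes_no(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```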
Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart
2017-06-01
This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut when compared to the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operation room with patient-specific guides by accurate guidance of the surgical tools to perform the desired cuts.
Practical approach to subject-specific estimation of knee joint contact force.
Knarr, Brian A; Higginson, Jill S
2015-08-20
Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
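Standard static optimization, the baseline method (1) above, can be sketched as a small constrained optimization: minimize the sum of squared muscle activations subject to the muscles reproducing the inverse-dynamics joint moment. The moment arms, maximum isometric forces, and target moment below are placeholders for illustration, not values from the study:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical muscle parameters for a single-joint example.
moment_arms = np.array([0.04, 0.05, 0.03, -0.035])   # m (negative = antagonist)
f_max = np.array([3000.0, 2500.0, 1500.0, 2000.0])   # N, max isometric forces
target_moment = 60.0                                  # N*m from inverse dynamics

def cost(a):
    return np.sum(a**2)          # standard static-optimization objective

constraints = [{"type": "eq",
                "fun": lambda a: moment_arms @ (f_max * a) - target_moment}]
bounds = [(0.0, 1.0)] * len(f_max)
res = minimize(cost, x0=np.full(len(f_max), 0.1), bounds=bounds,
               constraints=constraints, method="SLSQP")

activations = res.x
muscle_forces = f_max * activations
print("activations:", np.round(activations, 3))
print("joint moment check:", moment_arms @ muscle_forces)
```

In this framing, the subject-specific strength adjustment described above would amount to scaling f_max by clinically measured strength ratios before solving, while the joint contact force follows from summing the resulting muscle forces with the intersegmental load.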
NASA Astrophysics Data System (ADS)
Duan, Zhenhao; Li, Dedong
2008-10-01
A model is developed for the calculation of coupled phase and aqueous species equilibrium in the H2O-CO2-NaCl-CaCO3 system from 0 to 250 °C, 1 to 1000 bar, with NaCl concentrations up to saturation of halite. The vapor-liquid-solid (calcite, halite) equilibrium together with the chemical equilibrium of H+, Na+, Ca2+, CaHCO3+, Ca(OH)+, OH-, Cl-, HCO3-, CO32-, CO2(aq) and CaCO3(aq) in the aqueous liquid phase can be calculated as a function of temperature, pressure, NaCl concentration and CO2(aq) concentration, with accuracy close to that of experiments in the stated T-P-m range; hence calcite solubility, CO2 gas solubility, alkalinity and pH values can be accurately calculated. The merit and advantage of this model is its predictability: the model was generally not constructed by fitting experimental data. One of the focuses of this study is to predict calcite solubility, with accuracy consistent with previous experimental studies. The resulting model reproduces the following: (1) as temperature increases, the calcite solubility decreases; for example, when temperature increases from 273 to 373 K, calcite solubility decreases by about 50%; (2) with the increase of pressure, calcite solubility increases; for example, at 373 K changing pressure from 10 to 500 bar may increase calcite solubility by as much as 30%; (3) dissolved CO2 can increase calcite solubility substantially; (4) increasing the concentration of NaCl up to 2 m will increase calcite solubility, but further increasing the NaCl concentration beyond 2 m will decrease calcite solubility. The dependence of pH value, alkalinity, CO2 gas solubility, and the concentrations of many aqueous species on temperature, pressure and NaCl(aq) concentration can be obtained from the application of this model. Online calculation is made available on www.geochem-model.org/models/h2o_co2_nacl_caco3/calc.php.
Study on real-time force feedback for a master-slave interventional surgical robotic system.
Guo, Shuxiang; Wang, Yuan; Xiao, Nan; Li, Youxiang; Jiang, Yuhua
2018-04-13
In robot-assisted catheterization, haptic feedback is important, but is currently lacking. In addition, conventional interventional surgical robotic systems typically employ a master-slave architecture with an open-loop force feedback, which results in inaccurate control. We develop herein a novel real-time master-slave (RTMS) interventional surgical robotic system with a closed-loop force feedback that allows a surgeon to sense the true force during remote operation, provide adequate haptic feedback, and improve control accuracy in robot-assisted catheterization. As part of this system, we also design a unique master control handle that measures the true force felt by a surgeon, providing the basis for the closed-loop control of the entire system. We use theoretical and empirical methods to demonstrate that the proposed RTMS system provides a surgeon (using the master control handle) with a more accurate and realistic force sensation, which subsequently improves the precision of the master-slave manipulation. The experimental results show a substantial increase in the control accuracy of the force feedback and an increase in operational efficiency during surgery.
Advances in Homology Protein Structure Modeling
Xiang, Zhexin
2007-01-01
Homology modeling plays a central role in determining protein structure in the structural genomics project. The importance of homology modeling has been steadily increasing because of the large gap that exists between the overwhelming number of available protein sequences and experimentally solved protein structures, and also, more importantly, because of the increasing reliability and accuracy of the method. In fact, a protein sequence with over 30% identity to a known structure can often be predicted with an accuracy equivalent to a low-resolution X-ray structure. The recent advances in homology modeling, especially in detecting distant homologues, aligning sequences with template structures, modeling of loops and side chains, as well as detecting errors in a model, have contributed to reliable prediction of protein structure, which was not possible even several years ago. The ongoing efforts in solving protein structures, which can be time-consuming and often difficult, will continue to spur the development of a host of new computational methods that can fill in the gap and further contribute to understanding the relationship between protein structure and function. PMID:16787261
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
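As a simple illustration of the general idea of a moving average approach (a sketch, not the clinical tracking code), the leaf-perpendicular component of a measured target trace can be smoothed with a causal moving average before being sent to the MLC, trading some geometric fidelity for fewer beam holds. The motion trace, sampling rate, and window length below are made up:

```python
import numpy as np

def causal_moving_average(x, window):
    """Average of the most recent `window` samples at each time step."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[max(0, i - window + 1):i + 1].mean()
    return out

# Hypothetical perpendicular target motion: 4 s breathing period plus drift.
t = np.arange(0, 60, 0.05)                           # s, 20 Hz sampling
perp = 4.0 * np.sin(2 * np.pi * t / 4.0) + 0.02 * t  # mm

smoothed = causal_moving_average(perp, window=40)    # 2 s moving average
rms_err = np.sqrt(np.mean((smoothed - perp) ** 2))
print(f"RMS geometric error introduced by smoothing: {rms_err:.2f} mm")
```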
Li, Chunyan; Wu, Pei-Ming; Wu, Zhizhen; Ahn, Chong H; LeDoux, David; Shutter, Lori A; Hartings, Jed A; Narayan, Raj K
2012-02-01
The injured brain is vulnerable to increases in temperature after severe head injury. Therefore, accurate and reliable measurement of brain temperature is important to optimize patient outcome. In this work, we have fabricated, optimized and characterized temperature sensors for use with a micromachined smart catheter for multimodal intracranial monitoring. The developed temperature sensors have a resistance of 100.79 ± 1.19 Ω, a sensitivity of 67.95 mV/°C in the operating range from 15-50°C, and a time constant of 180 ms. Under the optimized excitation current of 500 μA, an adequate signal-to-noise ratio was achieved without causing self-heating, and changes in immersion depth did not introduce clinically significant errors of measurement (<0.01°C). We evaluated the accuracy and long-term drift (5 days) of twenty temperature sensors in comparison to two types of commercial temperature probes (USB Reference Thermometer, a NIST-traceable bulk probe with 0.05°C accuracy; and IT-21, a type T clinical microprobe with guaranteed 0.1°C accuracy) under controlled laboratory conditions. These in vitro experimental data showed that the temperature measurement performance of our sensors was accurate and reliable over the course of 5 days. The smart catheter temperature sensors provided accuracy and long-term stability comparable to those of commercial tissue-implantable microprobes, and therefore provide a means for temperature measurement in a microfabricated, multimodal cerebral monitoring device.
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments using two landmark buildings: the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples to compare InSAR and leveling approaches for building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
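The comparison described above essentially reduces to regressing the InSAR-derived displacements against the leveling values and reporting an RMSE. A compact sketch with made-up displacement pairs (not the Tianjin measurements) shows the two indices:

```python
import numpy as np

# Hypothetical paired measurements at common benchmarks (mm of vertical motion).
leveling = np.array([-3.1, -1.8, -0.4, 0.6, 1.9, 3.2, 4.4])
insar    = np.array([-2.8, -2.0, -0.7, 0.9, 1.7, 3.5, 4.1])

slope, intercept = np.polyfit(leveling, insar, deg=1)    # OLS regression
r = np.corrcoef(leveling, insar)[0, 1]
rmse = np.sqrt(np.mean((insar - leveling) ** 2))         # measurement-of-error index

print(f"OLS: insar = {slope:.2f} * leveling + {intercept:.2f},  r = {r:.3f}")
print(f"RMSE vs leveling: {rmse:.2f} mm")
```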
Prediction of customer behaviour analysis using classification algorithms
NASA Astrophysics Data System (ADS)
Raju, Siva Subramanian; Dhandayudam, Prabha
2018-04-01
Customer relationship management plays a crucial role in analyzing customer behavior patterns and their value to an enterprise. Analysis of customer data can be performed efficiently using various data mining techniques, with the goal of developing business strategies and enhancing the business. In this paper, three classification models (NB, J48, and MLPNN) are studied and evaluated for our experimental purpose. The performance of the three classifiers is compared using three different parameters (accuracy, sensitivity, specificity), and the experimental results show that the J48 algorithm has better accuracy than the NB and MLPNN algorithms.
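The three comparison parameters reduce to simple ratios over a binary confusion matrix; a short sketch (the counts are hypothetical, not from the study) makes the definitions explicit:

```python
def classification_metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity (recall) and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for one classifier on a hold-out customer set.
acc, sens, spec = classification_metrics(tp=420, fn=80, fp=60, tn=440)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```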
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.
Dai, Jiewen; Wu, Jinyang; Wang, Xudong; Yang, Xudong; Wu, Yunong; Xu, Bing; Shi, Jun; Yu, Hongbo; Cai, Min; Zhang, Wenbin; Zhang, Lei; Sun, Hao; Shen, Guofang; Zhang, Shilei
2016-01-01
Numerous problems regarding craniomaxillofacial navigation surgery are not well understood. In this study, we performed a double-center clinical study to quantitatively evaluate the characteristics of our navigation system and experience in craniomaxillofacial navigation surgery. Fifty-six patients with craniomaxillofacial disease were included and randomly divided into experimental (using our AccuNavi-A system) and control (using the Stryker system) groups to compare the surgical effects. The results revealed that the average pre-operative planning time was 32.32 mins vs 29.74 mins between the experimental and control groups, respectively (p > 0.05). The average operative time was 295.61 mins vs 233.56 mins (p > 0.05). The point registration orientation accuracy was 0.83 mm vs 0.92 mm. The maximal average preoperative navigation orientation accuracy was 1.03 mm vs 1.17 mm. The maximal average persistent navigation orientation accuracy was 1.15 mm vs 0.09 mm. The maximal average navigation orientation accuracy after registration recovery was 1.15 mm vs 1.39 mm between the experimental and control groups. All patients healed, and their function and profile improved. These findings demonstrate that although surgeons should consider the patients' time and monetary costs, our qualified navigation surgery system and experience could offer an accurate guide during a variety of craniomaxillofacial surgeries. PMID:27305855
NASA Astrophysics Data System (ADS)
Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik
2016-11-01
Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to place a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to clinical practice, inaccurate targeting in rodents usually remains unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted, and aspecific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may enable more effective use of the experimental data by excluding off-target cases early in the study.
Linearization of the bradford protein assay.
Ernst, Orna; Zor, Tsaffrir
2010-04-12
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
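The linearization amounts to fitting the 590/450 nm absorbance ratio against protein amount and inverting that calibration for unknowns. The sketch below uses invented absorbance readings purely to show the calculation, not data from the paper:

```python
import numpy as np

# Hypothetical BSA standards (ng) and absorbance readings.
protein_ng = np.array([0, 250, 500, 1000, 2000, 4000])
a590 = np.array([0.42, 0.49, 0.56, 0.68, 0.88, 1.14])
a450 = np.array([0.70, 0.68, 0.66, 0.62, 0.55, 0.44])

ratio = a590 / a450
slope, intercept = np.polyfit(protein_ng, ratio, deg=1)   # linear standard curve

def protein_from_ratio(sample_a590, sample_a450):
    """Invert the linear 590/450 calibration for an unknown sample."""
    return (sample_a590 / sample_a450 - intercept) / slope

print(f"calibration: ratio = {slope:.3e} * ng + {intercept:.3f}")
print(f"unknown sample estimate: {protein_from_ratio(0.80, 0.59):.0f} ng")
```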
Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network
NASA Astrophysics Data System (ADS)
Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu
2018-03-01
The operation of the power grid will inevitably be affected by the increasing scale of wind farms due to the inherent randomness and uncertainty of wind, so accurate wind speed forecasting is critical for the stability of grid operation. Traditional forecasting methods typically do not take into account the frequency characteristics of wind speed and therefore cannot reflect the nature of wind speed signal changes, owing to the limited generalization ability of their model structures. An AdaBoost neural network in combination with multi-resolution, multi-scale decomposition of the wind speed is proposed to design the model structure in order to improve the forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province is given to demonstrate that the proposed strategy can improve the robustness and accuracy of the forecast.
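A rough sketch of the general scheme, assuming PyWavelets and scikit-learn are available and using a made-up wind speed series rather than the Jiangsu farm data (the paper pairs the decomposition with an AdaBoost neural network; an AdaBoostRegressor stands in here): decompose the series into sub-bands, train one model per sub-band on lagged values, and sum the sub-band forecasts.

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(3)
t = np.arange(2000)
# Hypothetical hourly wind speed: diurnal cycle + faster oscillation + noise (m/s).
wind = (6 + 2 * np.sin(2 * np.pi * t / 24) + 0.8 * np.sin(2 * np.pi * t / 6)
        + 0.5 * rng.standard_normal(t.size))

def subband_series(x, wavelet="db4", level=3):
    """Reconstruct each wavelet sub-band back to the original length."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[: len(x)])
    return bands

def lag_matrix(x, lags=12):
    X = np.column_stack([x[i: len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

lags, split = 12, 1800
# Note: decomposing the full series leaks future information into the sub-bands;
# a real forecasting pipeline would decompose only the training window.
forecast = np.zeros(len(wind) - split)
for band in subband_series(wind):
    X, y = lag_matrix(band, lags)
    model = AdaBoostRegressor(n_estimators=50, random_state=0)
    model.fit(X[: split - lags], y[: split - lags])
    forecast += model.predict(X[split - lags:])

rmse = np.sqrt(np.mean((forecast - wind[split:]) ** 2))
print(f"test RMSE: {rmse:.2f} m/s")
```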
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
Effects of window size and shape on accuracy of subpixel centroid estimation of target images
NASA Technical Reports Server (NTRS)
Welch, Sharon S.
1993-01-01
A new algorithm is presented for increasing the accuracy of subpixel centroid estimation of (nearly) point target images in cases where the signal-to-noise ratio is low and the signal amplitude and shape vary from frame to frame. In the algorithm, the centroid is calculated over a data window that is matched in width to the image distribution. Fourier analysis is used to explain the dependency of the centroid estimate on the size of the data window, and simulation and experimental results are presented which demonstrate the effects of window size for two different noise models. The effects of window shape were also investigated for uniform and Gaussian-shaped windows. The new algorithm was developed to improve the dynamic range of a close-range photogrammetric tracking system that provides feedback for control of a large gap magnetic suspension system (LGMSS).
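The core of such an approach, computing the intensity-weighted centroid over a data window matched to the spot width, can be sketched as follows; the Gaussian spot, noise level, and window size are illustrative assumptions, not the paper's tracking-system parameters:

```python
import numpy as np

def windowed_centroid(image, center, half_width):
    """Intensity-weighted centroid computed only inside a square data window."""
    r0, c0 = center
    win = image[r0 - half_width: r0 + half_width + 1,
                c0 - half_width: c0 + half_width + 1]
    rows, cols = np.indices(win.shape)
    total = win.sum()
    return (r0 - half_width + (rows * win).sum() / total,
            c0 - half_width + (cols * win).sum() / total)

# Hypothetical noisy, nearly point-like Gaussian target image.
rng = np.random.default_rng(4)
y, x = np.mgrid[0:64, 0:64]
true_rc = (31.6, 32.3)
image = np.exp(-((y - true_rc[0])**2 + (x - true_rc[1])**2) / (2 * 2.0**2))
image += 0.02 * rng.standard_normal(image.shape)

est = windowed_centroid(image, center=(32, 32), half_width=6)
print("estimate:", est, "true:", true_rc)
```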
Urban Change Detection of Pingtan City based on Bi-temporal Remote Sensing Images
NASA Astrophysics Data System (ADS)
Degang, JIANG; Jinyan, XU; Yikang, GAO
2017-02-01
In this paper, a pair of SPOT 5-6 images with a resolution of 0.5 m is selected. An object-oriented classification method is applied to the two images, and five classes of ground features are identified: man-made objects, farmland, forest, water body, and unutilized land. An auxiliary ASTER GDEM was used to improve the classification accuracy. Change detection based on the classification results was then performed, and an accuracy assessment was carried out; satisfactory results were obtained. The results show that great changes in Pingtan city have been detected, namely the expansion of the city area and the increase in man-made buildings, roads and other infrastructure, with the establishment of the Pingtan comprehensive experimental zone.
Three-gas detection system with IR optical sensor based on NDIR technology
NASA Astrophysics Data System (ADS)
Tan, Qiulin; Tang, Licheng; Yang, Mingliang; Xue, Chenyang; Zhang, Wendong; Liu, Jun; Xiong, Jijun
2015-11-01
In this paper, a three-gas detection system with an environmental parameter compensation method is proposed based on the non-dispersive infrared (NDIR) technique, which can be applied to detect multiple gases (methane, carbon dioxide and carbon monoxide). In this system, an IR source and four single-channel pyroelectric sensors are successfully integrated into a miniature optical gas chamber. The inner wall of the chamber, coated with an Au film, is designed as paraboloids. The infrared light is reflected twice before reaching the detectors, thus increasing the optical path. In addition, a compensation method is presented to overcome the influence of environmental variations (ambient temperature, humidity and pressure), thereby improving gas detection accuracy. Experimental results demonstrated that the detection ranges are 0-50,000 ppm for CH4, 0-44,500 ppm for CO, and 0-48,000 ppm for CO2, and the accuracy is ±0.05%.
Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control
NASA Astrophysics Data System (ADS)
Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.
2005-01-01
This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC for MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of errors in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
Separating vegetation and soil temperature using airborne multiangular remote sensing image data
NASA Astrophysics Data System (ADS)
Liu, Qiang; Yan, Chunyan; Xiao, Qing; Yan, Guangjian; Fang, Li
2012-07-01
Land surface temperature (LST) is a key parameter in land process research. Many research efforts have been devoted to increasing the accuracy of LST retrieval from remote sensing. However, because natural land surfaces are non-isothermal, component temperatures are also required in applications such as evapo-transpiration (ET) modeling. This paper proposes a new algorithm to separately retrieve vegetation temperature and soil background temperature from multiangular thermal infrared (TIR) remote sensing data. The algorithm is based on the localized correlation between the visible/near-infrared (VNIR) bands and the TIR band. This method was tested on the airborne image data acquired during the Watershed Allied Telemetry Experimental Research (WATER) campaign. Preliminary validation indicates that the remote sensing-retrieved results can reflect the spatial and temporal trend of component temperatures. The accuracy is within three degrees, while the difference between vegetation and soil temperature can be as large as twenty degrees.
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
Stochastic algorithm for simulating gas transport coefficients
NASA Astrophysics Data System (ADS)
Rudyak, V. Ya.; Lezhnev, E. V.
2018-02-01
The aim of this paper is to create a molecular algorithm for modeling transport processes in gases that is more efficient than the molecular dynamics method. To this end, the dynamics of the molecules are modeled stochastically. In a rarefied gas, it is sufficient to consider the evolution of molecules only in velocity space, whereas for a dense gas it is necessary to model the dynamics of molecules in physical space as well. Adequate integral characteristics of the studied system are obtained by averaging over a sufficiently large number of independent phase trajectories. The efficiency of the proposed algorithm was demonstrated by modeling the self-diffusion coefficients and the viscosity of several gases. It was shown that accuracy comparable to experimental accuracy can be obtained with a relatively small number of molecules. The modeling accuracy increases with the number of molecules and phase trajectories used.
NASA Astrophysics Data System (ADS)
Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh
2015-09-01
A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets using the ABAQUS commercial software package. Coupled electro-thermal analysis and uncoupled thermal-mechanical analysis are used to model the process. In order to improve the accuracy of the simulation, material properties, including physical, thermal, and mechanical properties, are considered to be temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results and good agreement is observed. Thus, the FE model developed in this paper suitably predicts the quality and shape of the weld nuggets and the temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that the expensive experimental process can be avoided. The results show that increasing welding time and current intensity leads to an increase in the nugget size and electrode indentation, whereas increasing electrode tip diameter decreases nugget size and electrode indentation.
NASA Astrophysics Data System (ADS)
Morrissey, Liam S.; Nakhla, Sam
2018-07-01
The effect of porosity on elastic modulus in low-porosity materials is investigated. First, several models used to predict the reduction in elastic modulus due to porosity are compared with a compilation of experimental data to determine their ranges of validity and accuracy. The overlapping solid spheres model is found to be most accurate with the experimental data and valid between 3 and 10 pct porosity. Next, a FEM is developed with the objective of demonstrating that a macroscale plate with a center hole can be used to model the effect of microscale porosity on elastic modulus. The FEM agrees best with the overlapping solid spheres model and shows higher accuracy with experimental data than the overlapping solid spheres model.
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
NASA Astrophysics Data System (ADS)
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free electron lasers and diffraction limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and a height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to the inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss the experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique leads to an increase in measurement accuracy, as well as in the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.
Stochastic localization of microswimmers by photon nudging.
Bregulla, Andreas P; Yang, Haw; Cichos, Frank
2014-07-22
Force-free trapping and steering of single photophoretically self-propelled Janus-type particles using a feedback mechanism is experimentally demonstrated. Real-time information on particle position and orientation is used to switch the self-propulsion mechanism of the particle optically. The orientational Brownian motion of the particle thereby provides the reorientation mechanism for the microswimmer. The particle size dependence of the photophoretic propulsion velocity reveals that photon nudging provides an increased position accuracy for decreasing particle radius. The explored steering mechanism is suitable for navigation in complex biological environments and for in-depth studies of collective swimming effects.
Pile, Victoria; Lau, Jennifer Y F; Topor, Marta; Hedderly, Tammy; Robinson, Sally
2018-05-18
Aberrant interoceptive accuracy could contribute to the co-occurrence of anxiety and premonitory urge in chronic tic disorders (CTD). If it can be manipulated through intervention, it would offer a transdiagnostic treatment target for tics and anxiety. Interoceptive accuracy was first assessed consistent with previous protocols and then re-assessed following an instruction attempting to experimentally enhance awareness. The CTD group demonstrated lower interoceptive accuracy than controls but, importantly, this group difference was no longer significant following instruction. In the CTD group, better interoceptive accuracy was associated with higher anxiety and lower quality of life, but not with premonitory urge. Aberrant interoceptive accuracy may represent an underlying trait in CTD that can be manipulated, and relates to anxiety and quality of life.
Fuzzy method of recognition of high molecular substances in evidence-based biology
NASA Astrophysics Data System (ADS)
Olevskyi, V. I.; Smetanin, V. T.; Olevska, Yu. B.
2017-10-01
Modern requirements for achieving reliable, high-quality research results put mathematical methods of data analysis at the forefront. Because of this, evidence-based methods of processing experimental data have become increasingly popular in the biological sciences and medicine. Their basis is meta-analysis, a method of quantitative generalization of a large number of randomized trials that address the same problem but are often contradictory and performed by different authors. It allows the most important trends and quantitative indicators in the data to be identified, advanced hypotheses to be verified, and new effects in the population genotype to be discovered. The existing methods of recognizing high molecular substances by gel electrophoresis of proteins under denaturing conditions are based on approximate methods for comparing the contrast of electrophoregrams with a standard solution of known substances. We propose a fuzzy method for modeling experimental data to increase the accuracy and validity of findings in the detection of new proteins.
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
In this research, the Particle Swarm Optimization (PSO) algorithm is used to optimize the feature weights of the Voting Feature Interval 5 (VFI5) algorithm, yielding a combined PSO-VFI5 model. Optimizing the feature weights on the diabetes and dyspepsia data is important because these conditions closely affect many people's lives, and any inaccuracy in determining the most dominant feature weights could have fatal consequences. With the PSO algorithm, the accuracy of fold 1 increased from 92.31% to 96.15%, a gain of 3.8%; the accuracy of fold 2 was 92.52% with VFI5 and remained unchanged with PSO; and the accuracy of fold 3 increased from 85.19% to 96.29%, a gain of 11%. The total accuracy over the three trials increased by 14%. In general, the Particle Swarm Optimization algorithm succeeded in increasing the accuracy in several folds, so it can be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
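For readers unfamiliar with the approach, the following is a minimal sketch of a PSO loop searching for feature weights; the objective function is a dummy stand-in for the VFI5 cross-validation accuracy, and all parameter values (swarm size, inertia, acceleration constants) are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_accuracy(weights):
    """Placeholder for the VFI5 cross-validation accuracy under the given
    feature weights; a dummy objective so the sketch runs on its own."""
    target = np.linspace(0.1, 1.0, weights.size)   # hypothetical optimum
    return 1.0 - np.mean((weights - target) ** 2)

def pso_feature_weights(n_features, n_particles=20, n_iter=100,
                        w=0.7, c1=1.5, c2=1.5):
    # Positions are candidate feature-weight vectors in [0, 1].
    pos = rng.random((n_particles, n_features))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([evaluate_accuracy(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        fit = np.array([evaluate_accuracy(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest, pbest_fit.max()

weights, best_obj = pso_feature_weights(n_features=8)
print("best weights:", np.round(weights, 3), "objective:", round(best_obj, 4))
```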
Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text
ERIC Educational Resources Information Center
Reid, Alan J.; Morrison, Gary R.; Bol, Linda
2017-01-01
This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…
ERIC Educational Resources Information Center
Pena, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy
2006-01-01
Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest…
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the out-of-focus projector calibration accuracy. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally demonstrates the principle that the projector has noticeable distortions outside its focus plane. Based on this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final accuracy parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
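For context, a standard high-order radial plus tangential (Brown-type) distortion representation of the kind referred to above can be written as follows; the particular orders and coefficients retained by the authors may differ.

```latex
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2),\\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y,
\end{aligned}
\qquad r^2 = x^2 + y^2,
```

where (x, y) are the ideal (undistorted) normalized coordinates on the projection plane, (x_d, y_d) the distorted coordinates, k_i the radial coefficients, and p_1, p_2 the tangential coefficients.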
An Improved BLE Indoor Localization with Kalman-Based Fusion: An Experimental Study
Röbesaat, Jenny; Zhang, Peilin; Abdelaal, Mohamed; Theel, Oliver
2017-01-01
Indoor positioning has attracted great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, no single technology has yet proved effective in all situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter. PMID:28445421
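A minimal sketch of the kind of Kalman fusion described, assuming a constant-velocity motion model with BLE trilateration fixes as measurements; the state layout, noise levels and update interval are illustrative assumptions, not the paper's exact filter design.

```python
import numpy as np

dt = 0.5                               # hypothetical update interval [s]
F = np.array([[1, 0, dt, 0],           # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],            # we observe x, y from trilateration
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.05                   # process noise (dead-reckoning drift)
R = np.eye(2) * 1.0                    # BLE trilateration noise (~1 m std)

x = np.zeros(4)                        # state: [x, y, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    # Predict with the motion model (dead reckoning), then correct with
    # the BLE trilateration fix z = [x_meas, y_meas].
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([0.9, 0.1]), np.array([1.8, 0.3]), np.array([2.9, 0.5])]:
    x, P = kalman_step(x, P, z)
    print("fused position:", np.round(x[:2], 2))
```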
NASA Astrophysics Data System (ADS)
Karedla, Narain; Chizhik, Anna M.; Stein, Simon C.; Ruhlandt, Daja; Gregor, Ingo; Chizhik, Alexey I.; Enderlein, Jörg
2018-05-01
Our paper presents the first theoretical and experimental study using single-molecule Metal-Induced Energy Transfer (smMIET) for localizing single fluorescent molecules in three dimensions. Metal-Induced Energy Transfer describes the resonant energy transfer from the excited state of a fluorescent emitter to surface plasmons in a metal nanostructure. This energy transfer is strongly distance-dependent and can be used to localize an emitter along one dimension. We have used Metal-Induced Energy Transfer in the past for localizing fluorescent emitters with nanometer accuracy along the optical axis of a microscope. The combination of smMIET with single-molecule localization based super-resolution microscopy that provides nanometer lateral localization accuracy offers the prospect of achieving isotropic nanometer localization accuracy in all three spatial dimensions. We give a thorough theoretical explanation and analysis of smMIET, describe its experimental requirements, also in its combination with lateral single-molecule localization techniques, and present first proof-of-principle experiments using dye molecules immobilized on top of a silica spacer, and of dye molecules embedded in thin polymer films.
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide an understanding of the system's characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina
2013-01-01
In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(zi|D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.
Simultaneous-Fault Diagnosis of Gearboxes Using Probabilistic Committee Machine
Zhong, Jian-Hua; Wong, Pak Kin; Yang, Zhi-Xin
2016-01-01
This study combines signal de-noising, feature extraction, two pairwise-coupled relevance vector machines (PCRVMs) and particle swarm optimization (PSO) for parameter optimization to form an intelligent diagnostic framework for gearbox fault detection. Firstly, the sensor signals are de-noised by using the wavelet threshold method to lower the noise level. Then, the Hilbert-Huang transform (HHT) and energy pattern calculation are applied to extract the fault features from the de-noised signals. After that, an eleven-dimensional vector, which consists of the energies of nine intrinsic mode functions (IMFs), the maximum value of the HHT marginal spectrum and its corresponding frequency component, is obtained to represent the features of each gearbox fault. The two PCRVMs serve as two different fault detection committee members, and they are trained by using vibration and sound signals, respectively. The individual diagnostic result from each committee member is then combined by applying a new probabilistic ensemble method, which can improve the overall diagnostic accuracy and increase the number of detectable faults as compared to individual classifiers acting alone. The effectiveness of the proposed framework is experimentally verified by using test cases. The experimental results show the proposed framework is superior to existing single classifiers in terms of diagnostic accuracies for both single- and simultaneous-faults in the gearbox. PMID:26848665
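A rough sketch of how the eleven-dimensional feature vector described above can be assembled is given below, assuming the IMFs have already been produced by an EMD routine elsewhere; the marginal-spectrum computation is a simplified stand-in, and the bin count, sampling rate and toy signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def feature_vector(imfs, fs):
    """Build the eleven-dimensional feature vector described above from the
    first nine IMFs of a (de-noised) signal.
    imfs : array of shape (n_imfs, n_samples), assumed to come from an
           EMD routine run elsewhere; fs : sampling frequency in Hz."""
    imfs = np.asarray(imfs)[:9]
    energies = np.sum(imfs ** 2, axis=1)              # energy of each IMF

    # Simplified Hilbert marginal spectrum: accumulate instantaneous
    # amplitude into frequency bins for every IMF.
    n_bins = 256
    edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
    marginal = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)[:-1]
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
        idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
        np.add.at(marginal, idx, amp)

    k = int(np.argmax(marginal))
    peak_val = marginal[k]
    peak_freq = 0.5 * (edges[k] + edges[k + 1])
    return np.concatenate([energies, [peak_val, peak_freq]])

# Toy usage with synthetic "IMFs" (sinusoids) at fs = 1 kHz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
toy_imfs = [np.sin(2 * np.pi * f0 * t) for f0 in np.linspace(20, 200, 9)]
print(feature_vector(toy_imfs, fs).shape)   # -> (11,)
```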
Composite Materials NDE Using Enhanced Leaky Lamb Wave Dispersion Data Acquisition Method
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Mal, Ajit; Lih, Shyh-Shiuh; Chang, Zensheu
1999-01-01
The leaky Lamb wave (LLW) technique is approaching a maturity level that is making it an attractive quantitative NDE tool for composites and bonded joints. Since it was first observed in 1982, the phenomenon has been studied extensively, particularly in composite materials. The wave is induced by oblique insonification using a pitch-catch arrangement and the plate wave modes are detected by identifying minima in the reflected spectra to obtain the dispersion data. The wave behavior in multi-orientation laminates has been well documented and corroborated experimentally with high accuracy. The sensitivity of the wave to the elastic constants of the material and to the boundary conditions led to the capability to measure the elastic properties of bonded joints. Recently, the authors significantly enhanced the LLW method's capability by increasing the speed of the data acquisition, the number of modes that can be identified and the accuracy of the data inversion. In spite of the theoretical and experimental progress, methods that employ oblique insonification of composites are still not being applied as standard industrial NDE methods. The authors investigated the issues that are hampering the transition of the LLW to industrial applications and identified 4 key issues. The current capability of the method and the nature of these issues are described in this paper.
NASA Astrophysics Data System (ADS)
Llewellin, E. W.
2010-02-01
LBflow is a flexible, extensible implementation of the lattice Boltzmann method, developed with geophysical applications in mind. The theoretical basis for LBflow, and its implementation, are presented in the companion paper, 'Part I'. This article covers the practical usage of LBflow and presents guidelines for obtaining optimal results from available computing power. The relationships among simulation resolution, accuracy, runtime and memory requirements are investigated in detail. Particular attention is paid to the origin, quantification and minimization of errors. LBflow is validated against analytical, numerical and experimental results for a range of three-dimensional flow geometries. The fluid conductance of prismatic pipes with various cross sections is calculated with LBflow and found to be in excellent agreement with published results. Simulated flow along sinusoidally constricted pipes gives good agreement with experimental data for a wide range of Reynolds number. The permeability of packs of spheres is determined and shown to be in excellent agreement with analytical results. The accuracy of internal flow patterns within the investigated geometries is also in excellent quantitative agreement with published data. The development of vortices within a sinusoidally constricted pipe with increasing Reynolds number is shown, demonstrating the insight that LBflow can offer as a 'virtual laboratory' for fluid flow.
A Boussinesq-scaled, pressure-Poisson water wave model
NASA Astrophysics Data System (ADS)
Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint
2015-02-01
Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and the result is solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ²) and weakly nonlinear O(μ^N) type are presented, and the analytical and numerical properties of the O(μ²) and O(μ⁴) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise due to the Boussinesq scaling. The optimal O(μ²) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ⁴) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ⁴) model shows excellent agreement with experimental data.
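For reference, the exact linear dispersion relation and the Padé [2,2] approximant in kh that the optimal O(μ²) model is said to match can be written as follows (a standard result, shown here only to make the accuracy statement concrete):

```latex
\frac{c^2}{gh} \;=\; \frac{\tanh(kh)}{kh}
\;\approx\;
\frac{1 + \dfrac{(kh)^2}{15}}{1 + \dfrac{2(kh)^2}{5}} .
```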
Influence of target reflection on three-dimensional range gated reconstruction.
Chua, Sing Yee; Wang, Xin; Guo, Ningqun; Tan, Ching Seong
2016-08-20
The range gated technique is a promising laser ranging method that is widely used in different fields such as surveillance, industry, and the military. In a range gated system, a reflected laser pulse returned from the target scene contains key information for range reconstruction, which directly affects the system performance. Therefore, it is necessary to study the characteristics and effects of the target reflection factor. In this paper, theoretical and experimental analyses are performed to investigate the influence of target reflection on three-dimensional (3D) range gated reconstruction. Based on laser detection and ranging (LADAR) and bidirectional reflection distribution function (BRDF) theory, a 3D range gated reconstruction model is derived and the effect on range accuracy is analyzed from the perspectives of target surface reflectivity and angle of laser incidence. Our theoretical and experimental study shows that the range accuracy is proportional to the target surface reflectivity but decreases as the angle of incidence increases, in accordance with the BRDF model. The presented findings establish a comprehensive understanding of target reflection in 3D range gated reconstruction, which is of interest to various applications such as target recognition and object modeling. This paper provides a reference for future improvements in accurate range compensation and correction.
Neutron density profile in the lunar subsurface produced by galactic cosmic rays
NASA Astrophysics Data System (ADS)
Ota, Shuya; Sihver, Lembit; Kobayashi, Shingo; Hasebe, Nobuyuki
Neutron production by galactic cosmic rays (GCR) in the lunar subsurface is very important when performing lunar and planetary nuclear spectroscopy and space dosimetry. Further improvements to estimate this production with increased accuracy are therefore required. GCR, which is the main contributor to neutron production in the lunar subsurface, consists not only of protons but also of heavy components such as He, C, N, O, and Fe. Because of that, it is important to precisely estimate the neutron production from such components for lunar spectroscopy and space dosimetry. Therefore, the neutron production from GCR particles including heavy components in the lunar subsurface was simulated with the Particle and Heavy Ion Transport code System (PHITS), using several heavy ion interaction models. This work presents PHITS simulations of the neutron density as a function of depth (neutron density profile) in the lunar subsurface, and the results are compared with experimental data obtained by the Apollo 17 Lunar Neutron Probe Experiment (LNPE). From our previous study, it has been found that the accuracy of the proton-induced neutron production models is the most influential factor when performing precise calculations of neutron production in the lunar subsurface. Therefore, a benchmarking of proton-induced neutron production models against experimental data was performed to estimate and improve the precision of the calculations. It was found that the calculated neutron production using the best model of Cugnon Old (E < 3 GeV) and JAM (E > 3 GeV) gave up to 30% higher values than experimental results. Therefore, a high energy nuclear data file (JENDL-HE) was used instead of the Cugnon Old model at energies below 3 GeV. Then, the calculated neutron density profile successfully reproduced the experimental data from the LNPE within experimental errors of 15% (measurement) + 30% (systematic). In this presentation, we summarize and discuss our calculated results of neutron production in the lunar subsurface.
Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude
2014-10-01
The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B_avail and appK_D without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data-driven strategy for determining reliable regional estimates of receptor density (B_avail) and in vivo affinity (1/appK_D), and to validate the strategy using a simulation model. The data-driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to using a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B_avail and appK_D values to mimic disease states. Also, the effect of using a reference region and of typical PET noise on the stability and accuracy of the estimates was investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data-driven method, were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data-driven method were much improved compared with the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data-driven method provides accurate and precise estimates of B_avail and appK_D for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
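For context, a minimal statement of the equilibrium saturation relation underlying estimates of B_avail and appK_D is given below, with B the bound and F the free ligand concentration; this is the generic saturation form, not necessarily the exact operational equation of the PSA.

```latex
B \;=\; \frac{B_{\mathrm{avail}}\, F}{\mathrm{app}K_{D} + F}.
```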
NASA Astrophysics Data System (ADS)
Chung, Kee-Choo; Park, Hwangseo
2016-11-01
The performance of the extended solvent-contact model has been addressed in the SAMPL5 blind prediction challenge for the distribution coefficient (LogD) of drug-like molecules with respect to the cyclohexane/water partitioning system. All the atomic parameters defined for 41 atom types in the solvation free energy function were optimized by operating a standard genetic algorithm with respect to water and cyclohexane solvents. In the parameterizations for cyclohexane, the experimental solvation free energy (ΔG_sol) data of 15 molecules for 1-octanol were combined with those of 77 molecules for cyclohexane to construct a training set because ΔG_sol values of the former were unavailable for cyclohexane in publicly accessible databases. Using this hybrid training set, we established the LogD prediction model with correlation coefficient (R), average error (AE), and root mean square error (RMSE) values of 0.55, 1.53, and 3.03, respectively, for the comparison of experimental and computational results for 53 SAMPL5 molecules. The modest accuracy in LogD prediction could be attributed to the incomplete optimization of atomic solvation parameters for cyclohexane. With respect to 31 SAMPL5 molecules containing the atom types for which experimental reference data for ΔG_sol were available for both water and cyclohexane, the accuracy in LogD prediction increased remarkably, with R, AE, and RMSE values of 0.82, 0.89, and 1.60, respectively. This significant enhancement in performance stemmed from the better optimization of atomic solvation parameters obtained by limiting the training set to molecules with experimental ΔG_sol data for cyclohexane. Due to the simplicity of model building and the low computational cost of parameterization, the extended solvent-contact model is anticipated to serve as a valuable computational tool for LogD prediction upon the enrichment of experimental ΔG_sol data for organic solvents.
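For orientation, the thermodynamic relation that links the solvation free energies discussed above to the cyclohexane/water partition coefficient can be written as follows; this is a minimal statement assuming the neutral species dominates (so that LogD ≈ LogP), and it is the generic relation rather than the exact scoring function of the extended solvent-contact model.

```latex
\log P_{\mathrm{chx/w}}
\;=\;
\frac{\Delta G_{\mathrm{sol}}^{\mathrm{water}} - \Delta G_{\mathrm{sol}}^{\mathrm{cyclohexane}}}{2.303\,RT},
\qquad
\log D \;\approx\; \log P .
```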
Role of optics in the accuracy of depth-from-defocus systems: comment.
Blendowske, Ralf
2007-10-01
In their paper "Role of optics in the accuracy of depth-from-defocus systems" [J. Opt. Soc. Am. A 24, 967 (2007)], the authors Blayvas, Kimmel, and Rivlin discuss the effect of optics on the depth reconstruction accuracy. To this end they applied an approach in Fourier space. An alternative derivation of their result in the spatial domain, based on geometrical optics, is presented and compared with their outcome. A better agreement with experimental data is achieved once certain ambiguities are resolved.
Arapiraca, A F C; Jonsson, Dan; Mohallem, J R
2011-12-28
We report an upgrade of the Dalton code to include post-Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10⁻⁴ debye in calculations of isotopic dipole moments. Calculations at the self-consistent-field level reach this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values.
Investigation of practical and theoretical accuracy of wireless indoor-positioning system UBISENSE
NASA Astrophysics Data System (ADS)
Wozniak, Marek; Odziemczyk, Waldemar; Nagorski, Kamil
2013-04-01
The development of Real Time Locating Systems has become an important add-on to many existing location-aware systems. While the Global Navigation Satellite System has solved most outdoor positioning problems, it fails to repeat this success indoors. Wireless indoor positioning systems have become very popular in recent years. One of them is the UBISENSE system. This system requires the user to carry an identity tag that is detected by sensors, which typically use triangulation to determine location. This paper presents the results of an investigation of the accuracy of tag positions using precise geodetic measurements and geometric analysis. Experimental measurements were carried out on a field test polygon using a precise TCRP 1201+ tacheometer and the complete Ubisense equipment. The results of the experimental measurements were analyzed and presented graphically using Surfer 8. The paper presents the results of the investigation of the theoretical and practical positioning accuracy under various working conditions.
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
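A minimal sketch of a sequential sampling (random-walk) decision model of the general kind described above is shown below; the mapping of a starting-point bias onto "Criterion" and of the boundary onto "Threshold", as well as all parameter values, are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift, threshold, criterion, noise=1.0, dt=0.01, max_t=5.0):
    """Generic sequential-sampling trial: evidence accumulates from a biased
    starting point ('criterion') until it crosses +threshold (respond
    'conflict') or -threshold (respond 'no conflict'), or time runs out."""
    x, t = criterion, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("conflict" if x >= threshold else "no conflict"), t

# Speed emphasis ~ lower threshold; liberal bias ~ starting point shifted up.
for label, thr, crit in [("accuracy, neutral", 2.0, 0.0),
                         ("speed, neutral",    1.0, 0.0),
                         ("accuracy, liberal", 2.0, 0.5)]:
    outcomes = [simulate_trial(drift=0.8, threshold=thr, criterion=crit)
                for _ in range(2000)]
    p_conflict = np.mean([o == "conflict" for o, _ in outcomes])
    mean_rt = np.mean([t for _, t in outcomes])
    print(f"{label:18s} P(conflict)={p_conflict:.2f}  mean RT={mean_rt:.2f}s")
```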
Recent developments in heterodyne laser interferometry at Harbin Institute of Technology
NASA Astrophysics Data System (ADS)
Hu, P. C.; Tan, J. B. B.; Yang, H. X. X.; Fu, H. J. J.; Wang, Q.
2013-01-01
In order to fulfill the requirements for high-resolution and high-precision heterodyne interferometric technologies and instruments, the laser interferometry group of HIT has developed several novel techniques for heterodyne interferometers, such as high accuracy laser frequency stabilization, dynamic sub-nanometer resolution phase interpolation and dynamic nonlinearity measurement. Based on a novel lock-point correction method and an asymmetric thermal structure, the frequency stabilized laser achieves a long-term stability of 1.2×10⁻⁸, and it can be steadily stabilized even in air flowing at up to 1 m/s. To achieve dynamic sub-nanometer resolution in laser heterodyne interferometers, a novel phase interpolation method based on a digital delay line is proposed. Experimental results show that the proposed 0.62 nm resolution phase interpolator, built with a 64-multiple PLL and an 8-tap digital delay line, achieves a static accuracy better than 0.31 nm and a dynamic accuracy better than 0.62 nm over velocities ranging from -2 m/s to 2 m/s. Meanwhile, an accurate beam polarization measuring setup is proposed to check and ensure the polarization state of the dual-frequency laser head, and a dynamic optical nonlinearity measuring setup is built to measure the optical nonlinearity of the heterodyne system accurately and quickly. Analysis and experimental results show that the beam polarization measuring setup achieves accuracies of 0.03° in the ellipticity angles and 0.04° in the non-orthogonality angle, and the optical nonlinearity measuring setup achieves an accuracy of 0.13°.
Accuracy of 3 different impression techniques for internal connection angulated implants.
Tsagkalidis, George; Tortopidis, Dimitrios; Mpikos, Pavlos; Kaisarlis, George; Koidis, Petros
2015-10-01
Making implant impressions with different angulations requires a more precise and time-consuming impression technique. The purpose of this in vitro study was to compare the accuracy of nonsplinted, splinted, and snap-fit impression techniques for internal connection implants with different angulations. An experimental device was used to allow a clinical simulation of impression making by means of open and closed tray techniques. Three different impression techniques (nonsplinted, acrylic-resin splinted, and indirect snap-fit) for 6 internally connected implants at different angulations (0, 15, 25 degrees) were examined using polyether. Impression accuracy was evaluated by measuring the differences in 3-dimensional (3D) position deviations between the implant body/impression coping before the impression procedure and the coping/laboratory analog positioned within the impression, using a coordinate measuring machine. Data were analyzed by 2-way ANOVA. Means were compared with the least significant difference criterion at P<.05. The results showed that at 25 degrees of implant angulation, the highest accuracy was obtained with the splinted technique (mean ±SE: 0.39 ±0.05 mm) and the lowest with the snap-fit technique (0.85 ±0.09 mm); at 15 degrees of angulation, there were no significant differences between the splinted (0.22 ±0.04 mm) and nonsplinted (0.15 ±0.02 mm) techniques, and the lowest accuracy was obtained with the snap-fit technique (0.95 ±0.15 mm); and no significant differences were found between the nonsplinted and splinted techniques at 0 degrees of implant placement. The splinted impression technique exhibited higher accuracy than the other techniques studied when increased implant angulations of 25 degrees were involved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Effects of night work, sleep loss and time on task on simulated threat detection performance.
Basner, Mathias; Rubinstein, Joshua; Fomberstein, Kenneth M; Coble, Matthew C; Ecker, Adrian; Avinash, Deepa; Dinges, David F
2008-09-01
To investigate the effects of night work and sleep loss on a simulated luggage screening task (SLST) that mimicked the x-ray system used by airport luggage screeners, we developed more than 5,800 unique simulated x-ray images of luggage organized into 31 stimulus sets of 200 bags each. In each set, 25% of the bags contained either a gun or a knife of low or high target difficulty. The 200-bag stimulus sets were then run on software that simulates an x-ray screening system (SLST). Signal detection analysis was used to obtain measures of hit rate (HR), false alarm rate (FAR), threat detection accuracy (A'), and response bias (B″D). This was an experimental laboratory study of 24 healthy nonprofessional volunteers (13 women, mean age ± SD = 29.9 ± 6.5 years). Subjects performed the SLST every 2 h during a 5-day period that included a 35-h period of wakefulness that extended to night work and then another day-work period after the night without sleep. Threat detection accuracy A' decreased significantly (P < 0.001) while FAR increased significantly (P < 0.001) during night work, and both A' (P = 0.001) and HR (P = 0.008) decreased during day work following sleep loss. There were prominent time-on-task effects on response bias B″D (P = 0.002) and response latency (P = 0.004), but accuracy A' was unaffected. Both HR and FAR increased significantly with increasing study duration (both P < 0.001), while response latency decreased significantly (P < 0.001). This study provides the first systematic evidence that night work and sleep loss adversely affect the accuracy of detecting complex real-world objects among high levels of background clutter. If the results can be replicated in professional screeners and real work environments, fatigue in luggage screening personnel may pose a threat to air traffic safety unless countermeasures for fatigue are deployed.
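For reference, the nonparametric signal detection indices named above are commonly computed from the hit rate H and false alarm rate F as follows (Grier's A' for H ≥ F and Donaldson's B″D, given here in their standard textbook forms, which may differ in detail from the exact variants used in the study):

```latex
A' \;=\; \tfrac{1}{2} + \frac{(H - F)\,(1 + H - F)}{4\,H\,(1 - F)},
\qquad
B''_{D} \;=\; \frac{(1 - H)(1 - F) - H F}{(1 - H)(1 - F) + H F}.
```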
Numerical simulation of isolation of cancer cells in a microfluidic chip
NASA Astrophysics Data System (ADS)
Djukic, T.; Topalovic, M.; Filipovic, N.
2015-08-01
Cancer is a disease characterized by the uncontrolled proliferation of cells. Circulating tumor cells (CTCs) separate from the primary tumor, circulate in the bloodstream and form metastases. Circulating tumor cells can be identified in the blood of a patient by taking a blood sample. Microfluidic chips are a new technology used to isolate these cells from the blood sample. In this paper a numerical model is presented that is able to simulate the motion of individual cells through a microfluidic chip. The proposed numerical model gives very valuable insight into the processes happening within a microfluidic chip. The accuracy of the proposed model is compared with experimental results. The experimental setup described in the literature is used to create identical geometrical domains and to define the simulation parameters. The good agreement between experimental and numerical results demonstrates that the proposed model can be successfully used to simulate the complex behaviour of CTCs inside microfluidic chips.
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry (PIV) has continued to mature, it has developed into a robust and flexible velocimetry technique used by expert and non-expert users alike. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, increased emphasis has recently been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV uncertainty quantification techniques to develop a framework in which estimated PIV confidence intervals are used to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling toward converged statistics are provided.
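A minimal statement of the sampling criterion such a framework builds on: if σ_u is the (uncertainty-informed) standard deviation of the sampled quantity and ε the acceptable half-width of the confidence interval on its mean, the required number of statistically independent samples follows from the standard relation below (an illustrative form; the paper's criteria may be more elaborate).

```latex
\varepsilon \;=\; z_{\alpha/2}\,\frac{\sigma_u}{\sqrt{N}}
\quad\Longrightarrow\quad
N \;\ge\; \left(\frac{z_{\alpha/2}\,\sigma_u}{\varepsilon}\right)^{2}.
```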
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above the 95% confidence interval). The study then extends to an experimental analysis of the clinical recommender system with respect to a noisy data environment. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
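A minimal sketch of a hybrid ensemble of the general kind described, combining a random-subspace bagging ensemble with a random forest via soft voting in scikit-learn; the synthetic dataset, estimator counts and voting scheme are illustrative assumptions, not the system's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the clinical data (the real features are not public).
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)

# Random subspace method: bag decision trees trained on random feature subsets.
subspace = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             max_features=0.5, bootstrap=False,
                             bootstrap_features=True, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

# Hybrid multiple-classifier system combining both ensembles by soft voting.
mcs = VotingClassifier([("subspace", subspace), ("rf", forest)], voting="soft")
print("CV accuracy:", cross_val_score(mcs, X, y, cv=5).mean().round(3))
```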
Ding, Li-Ping; Shao, Peng; Lu, Cheng; Zhang, Fang-Hui; Ding, Lei; Yuan, Tao Li
2016-08-17
The structure and bonding nature of neutral and negatively charged BxAlyH2 (x + y = 7, 8, 9) clusters are investigated with the aid of previously published experimental photoelectron spectra combined with the present density functional theory calculations. The comparison between the experimental photoelectron spectra and the theoretically simulated spectra helps to identify the ground state structures. The accuracy of the obtained ground state structures is further verified by calculating their adiabatic electron affinities and vertical detachment energies and comparing them against available experimental data. The results show that the structures of BxAlyH2 transform from three-dimensional to planar as the number of boron atoms increases. Moreover, boron atoms tend to bind together, forming Bn units. The hydrogen atoms prefer to bind with boron atoms rather than aluminum atoms. Analyses of the molecular orbitals of the ground state structures further support these results.
Design of micro bending deformer for optical fiber weight sensor
NASA Astrophysics Data System (ADS)
Ula, R. K.; Hanto, D.; Waluyo, T. B.; Adinanta, H.; Widiyatmoko, B.
2017-04-01
Road damage due to excessive loads is one of the causes of traffic accidents. A device to measure the weight of passing vehicles needs to be embedded in the road structure; thus, a weight sensor for passing vehicles is required. In this study, we designed a static-load weight sensor based on the power loss due to micro bending of an optical fiber mounted on a deformer board. The main components used are a 1310 nm LED as the light source, a multimode optical fiber as the transmission medium, and a power meter for measuring the power loss. This work focuses on obtaining a suitable deformer design for the weight sensor. Experimental results show that the 1.5 mm single-sided deformer design has an accuracy of 4.32%, while the 1.5 mm double-sided design has an accuracy of 98.77%. Increasing the deformer length to 2.5 mm gives an accuracy of 71.18% for the single-sided design and 76.94% for the double-sided design. The 1.5 mm double-sided micro bending design has high sensitivity and is capable of measuring loads up to 100 kg. The designed sensor has been tested by measuring the weight of a motorcycle, and it can be upgraded for measuring heavy vehicles.
Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest
NASA Astrophysics Data System (ADS)
Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun
2018-02-01
Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
Application of mixsep software package: Performance verification of male-mixed DNA analysis
HU, NA; CONG, BIN; GAO, TAO; CHEN, YU; SHEN, JUNYI; LI, SHUJIN; MA, CHUNLING
2015-01-01
An experimental model of male-mixed DNA (n=297) was constructed according to the mixed DNA construction principle. This comprised the use of the Applied Biosystems (ABI) 7500 quantitative polymerase chain reaction system, with scientific validation of mixture proportion (Mx; root-mean-square error ≤0.02). Statistical analysis was performed on locus separation accuracy using mixsep, a DNA mixture separation R-package, and the analytical performance of mixsep was assessed by examining the data distribution pattern of different mixed gradients, short tandem repeat (STR) loci and mixed DNA types. The results showed that locus separation accuracy had a negative linear correlation with the mixed gradient (R2=−0.7121). With increasing mixed gradient imbalance, locus separation accuracy first increased and then decreased, with the highest value detected at a gradient of 1:3 (≥90%). The mixed gradient, which is the theoretical Mx, was one of the primary factors that influenced the success of mixed DNA analysis. Among the 16 STR loci detected by Identifiler®, the separation accuracy was relatively high (>88%) for loci D5S818, D8S1179 and FGA, whereas the median separation accuracy value was lowest for the D7S820 locus. STR loci with relatively large numbers of allelic drop-out (ADO; >15) were all located in the yellow and red channels, including loci D18S51, D19S433, FGA, TPOX and vWA. These five loci featured low allele peak heights, which was consistent with the low sensitivity of the ABI 3130xl Genetic Analyzer to yellow and red fluorescence. The locus separation accuracy of the mixsep package was substantially different with and without the inclusion of ADO loci; inclusion of ADO significantly reduced the analytical performance of the mixsep package, which was consistent with the lack of an ADO functional module in this software. The present study demonstrated that the mixsep software had a number of advantages and was recommended for analysis of mixed DNA. This software was easy to operate and produced understandable results with a degree of controllability. PMID:25936428
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
NASA Astrophysics Data System (ADS)
Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new-type three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
[Object-oriented aquatic vegetation extraction approach based on visible vegetation indices].
Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning
2016-05-01
The estimation of scale parameters (ESP) image segmentation tool was used to determine the ideal image segmentation scale, and the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built up a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of image classification using supervised classification was 53.7%, whereas the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of the supervised classification was 0.4, and the Kappa value of the OBIA method was 0.9. The experimental results demonstrated that extracting aquatic vegetation using visible vegetation indices derived from mini-UAV data together with the OBIA method, as developed in this study, is feasible and could be applied in other physically similar areas.
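As an illustration of the visible-band indices referred to above, the sketch below computes the widely used Excess Green index (ExG = 2g - r - b) from chromatic coordinates; the indices actually selected for the decision tree in the study may differ.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index (ExG = 2g - r - b) computed from an RGB image
    array; one commonly used visible-band vegetation index, given here
    only as an example of the family of indices discussed above."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9                     # avoid division by zero
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return 2.0 * g - r - b

# Toy usage: a random "image"; in practice each segmented object would be
# assigned the mean index value of its pixels before rule-based labelling.
img = np.random.randint(0, 255, (4, 4, 3))
print(np.round(excess_green(img), 3))
```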
Lee, Sang Cheol; Hong, Sung Kyung
2016-12-11
This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than the gravitational one can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed to use velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances in the accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high-accuracy optic gyro, which was employed as the core attitude equipment in the helicopter.
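One common way to express the velocity aiding described above, shown here only as a hedged sketch under a forward-right-down body axis convention (sign conventions and the exact filter structure vary), is to subtract the turn-induced terms from the measured specific force a_meas before extracting roll φ and pitch θ:

```latex
\tilde{\mathbf{a}} \;=\; \mathbf{a}_{\mathrm{meas}} \;-\; \boldsymbol{\omega}\times\mathbf{v}_{\mathrm{air}} \;-\; \dot{\mathbf{v}}_{\mathrm{air}},
\qquad
\phi \approx \operatorname{atan2}\!\left(-\tilde{a}_y,\, -\tilde{a}_z\right),
\qquad
\theta \approx \operatorname{atan2}\!\left(\tilde{a}_x,\, \sqrt{\tilde{a}_y^{2} + \tilde{a}_z^{2}}\right).
```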
A Blade Tip Timing Method Based on a Microwave Sensor
Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie
2017-01-01
Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy. PMID:28492469
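For reference, the basic blade tip timing relation that converts an arrival-time deviation into a tip deflection is, in its simplest form (the system above additionally handles clearance changes and edge detection):

```latex
\Delta x \;=\; R\,\Omega\,\bigl(t_{\mathrm{measured}} - t_{\mathrm{expected}}\bigr),
```

where R is the blade tip radius and Ω the rotor angular speed.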
Lee, Sang Cheol; Hong, Sung Kyung
2016-01-01
This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and gyroscope, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that arises primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than the gravitational one can be considered disturbances, so errors increase in accordance with the flight dynamics. The proposed algorithm uses velocity as an aid to achieve high accuracy at low cost: it effectively eliminates the disturbances in the accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data, and its performance was confirmed through comparison with an attitude estimate from an attitude and heading reference system based on a high-accuracy optical gyro, which was employed as the core attitude equipment in the helicopter. PMID:27973429
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current techniques for analyzing the data, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction gives better class separation and yields better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, although Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
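A minimal sketch of the kind of wavelet-based spectral reduction compared above, using the PyWavelets package (an assumption; the paper does not name its implementation): each pixel's spectral vector is replaced by the approximation coefficients of a Haar or Daubechies decomposition before classification.

```python
import numpy as np
import pywt  # PyWavelets

def reduce_spectrum(pixel_spectrum, wavelet="db4", level=3):
    """Keep only the level-`level` approximation coefficients of a 1-D wavelet
    decomposition as the reduced feature vector for one hyperspectral pixel."""
    coeffs = pywt.wavedec(pixel_spectrum, wavelet, level=level)
    return coeffs[0]

# 224-band synthetic spectrum reduced with Haar vs. Daubechies-4
spectrum = np.sin(np.linspace(0, 8 * np.pi, 224)) + 0.05 * np.random.randn(224)
print(len(reduce_spectrum(spectrum, "haar")), len(reduce_spectrum(spectrum, "db4")))
```

The reduced vectors would then be fed to any conventional classifier; the band count (224) and decomposition level are arbitrary choices for illustration.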
The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center
NASA Astrophysics Data System (ADS)
Tseng, Pai-Chung; Chen, Shen-Len
The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies for CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer detects the temperature variation of the heat sources, and the thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built with Inprise C++ Builder to provide a user-friendly operation interface. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
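The direct light-field calibration idea, fitting a polynomial map from known object-space dot locations to observed sensor locations with no explicit lens model, can be sketched as a least-squares fit. The cubic order, the toy projection used to generate data, and all dimensions are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(xyz, order=3):
    """All monomials of (x, y, z) up to `order`, including the constant term."""
    x = np.atleast_2d(xyz)
    cols = [np.ones(len(x))]
    for degree in range(1, order + 1):
        for combo in combinations_with_replacement(range(3), degree):
            cols.append(np.prod(x[:, list(combo)], axis=1))
    return np.column_stack(cols)

def fit_mapping(world_pts, image_pts, order=3):
    """Least-squares polynomial mapping from object space (x, y, z) to sensor (u, v)."""
    coeffs, *_ = np.linalg.lstsq(poly_terms(world_pts, order), image_pts, rcond=None)
    return coeffs

def apply_mapping(coeffs, world_pts, order=3):
    return poly_terms(world_pts, order) @ coeffs

# Synthetic stand-in for dot-card images captured at several depths
rng = np.random.default_rng(0)
world = rng.uniform(-1, 1, (500, 3))
image = np.column_stack([800 + 400 * world[:, 0] / (2 + world[:, 2]),
                         600 + 400 * world[:, 1] / (2 + world[:, 2])])  # toy projection
coeffs = fit_mapping(world, image)
print(np.abs(apply_mapping(coeffs, world) - image).mean())  # mean residual in pixels
```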
Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Banos, Oresti; Lee, Sungyoung
2017-04-23
Activity recognition through smartphones has been proposed for a variety of applications. The orientation of the smartphone has a significant effect on the recognition accuracy; thus, researchers generally propose using features invariant to orientation or displacement to achieve this goal. However, those features reduce the capability of the recognition system to differentiate among some specific commuting activities (e.g., bus and subway) that normally involve similar postures. In this work, we recognize those activities by analyzing the vibrations of the vehicle in which the user is traveling. We extract natural vibration features of buses and subways to distinguish between them and address the confusion that can arise because the activities are both static in terms of user movement. We use the gyroscope to fix the accelerometer to the direction of gravity to achieve an orientation-free use of the sensor. We also propose a correction algorithm to increase the accuracy when used in free living conditions and a battery saving algorithm to consume less power without reducing performance. Our experimental results show that the proposed system can adequately recognize each activity, yielding better accuracy in the detection of bus and subway activities than existing methods.
Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth
NASA Astrophysics Data System (ADS)
Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.
2017-09-01
With the rapid development of location-based services (LBS), the demand for commercial indoor positioning has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor positioning, the complexity of the algorithms, and the cost of positioning are difficult to balance simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor positioning based on Bluetooth Low Energy. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning in the simulation data. The proposed scheme is a dynamic accumulation of knowledge rather than a single positioning process. The scheme uses inexpensive equipment and provides a new idea for the theory and method of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial deployment.
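A minimal sketch of two of the listed steps, converting BLE signal strength to range with a log-distance path-loss model and discarding gross-error readings, followed by a simple position fix. The path-loss parameters, the median-absolute-deviation outlier rule, and the weighted-centroid fix are assumptions; the paper's actual knowledge-base logic is not reproduced here.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: range (m) from a BLE RSSI reading.
    tx_power is the RSSI expected at 1 m; n is the path-loss exponent."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def drop_gross_errors(rssi_samples, k=3.0):
    """Discard readings further than k median-absolute-deviations from the median."""
    r = np.asarray(rssi_samples, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) + 1e-9
    return r[np.abs(r - med) <= k * 1.4826 * mad]

def weighted_centroid(beacon_xy, distances):
    """Simple position fix: beacons weighted by inverse estimated distance."""
    w = 1.0 / np.asarray(distances)
    return (w[:, None] * np.asarray(beacon_xy)).sum(axis=0) / w.sum()

beacons = [(0, 0), (5, 0), (0, 5)]
rssi = [[-62, -63, -61, -90, -62],      # one outlier burst at the first beacon
        [-71, -70, -72, -71, -70],
        [-75, -74, -76, -75, -74]]
d = [rssi_to_distance(drop_gross_errors(s).mean()) for s in rssi]
print(weighted_centroid(beacons, d))
```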
Sidek, Khairul; Khali, Ibrahim
2012-01-01
In this paper, a person identification mechanism implemented with a Cardioid-based graph using the electrocardiogram (ECG) is presented. The Cardioid-based graph has given reasonably good classification accuracy in differentiating between individuals. However, the current feature extraction method using Euclidean distance can be further improved by using the Mahalanobis distance measurement, producing extracted coefficients that take into account the correlations of the data set. Identification is then done by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG recordings from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimental results suggest that the proposed feature extraction method significantly increased the classification performance of subjects in both databases, with accuracy rising from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity and positive predictive values of 99.17%, 99.91% and 99.23% for NSRDB and 99.30%, 99.90% and 99.40% for MITDB also validate the proposed method. This result also indicates that the right feature extraction technique plays a vital role in determining the persistency of the classification accuracy for the Cardioid-based person identification mechanism.
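The distinction the abstract draws, that Euclidean distance ignores correlations between the Cardioid-graph axes whereas Mahalanobis distance accounts for them through the inverse covariance, is illustrated below on synthetic 2-D points. The data, reference set, and probe points are invented for illustration only.

```python
import numpy as np

def mahalanobis_features(points, reference):
    """Distance of each 2-D point to the centroid of a reference set, using the
    reference covariance so that correlated axes are accounted for."""
    ref = np.asarray(reference, dtype=float)
    mu = ref.mean(axis=0)
    vi = np.linalg.inv(np.cov(ref, rowvar=False))
    diff = np.asarray(points, dtype=float) - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, vi, diff))

def euclidean_features(points, reference):
    mu = np.asarray(reference, dtype=float).mean(axis=0)
    return np.linalg.norm(np.asarray(points) - mu, axis=1)

# Correlated synthetic "Cardioid" points: Mahalanobis discounts the shared trend
rng = np.random.default_rng(1)
base = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
probe = np.array([[2.0, 2.0], [2.0, -2.0]])   # on-trend vs. off-trend point
print(euclidean_features(probe, base))        # equal Euclidean distances
print(mahalanobis_features(probe, base))      # the off-trend point is much farther
```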
NASA Technical Reports Server (NTRS)
Goldman, L. J.; Seasholtz, R. G.
1982-01-01
Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial-flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated from a quasi-three-dimensional inviscid code, a three-dimensional inviscid code, and a three-dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, which allows a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.
Effective wavefront aberration measurement of spectacle lenses in as-worn status
NASA Astrophysics Data System (ADS)
Jia, Zhigang; Xu, Kai; Fang, Fengzhou
2018-04-01
An effective wavefront aberration analysis method for measuring spectacle lenses in as-worn status was proposed and verified using an experimental apparatus based on an eye rotation model. Two strategies were employed to improve the accuracy of measurement of the effective wavefront aberrations on the corneal sphere. The influences of three as-worn parameters, the vertex distance, pantoscopic angle, and face form angle, together with the eye rotation and corresponding incident beams, were objectively and quantitatively obtained. The experimental measurements of spherical single vision and freeform progressive addition lenses demonstrate the accuracy and validity of the proposed method and experimental apparatus, which provide a potential means of achieving supernormal vision correction with customization and personalization in optimizing the as-worn status-based design of spectacle lenses and evaluating their manufacturing and imaging qualities.
Evaluation of in silico tools to predict the skin sensitization potential of chemicals.
Verheyen, G R; Braeken, E; Van Deun, K; Van Miert, S
2017-01-01
Public domain and commercial in silico tools were compared for their performance in predicting the skin sensitization potential of chemicals. The packages were either statistical based (Vega, CASE Ultra) or rule based (OECD Toolbox, Toxtree, Derek Nexus). In practice, several of these in silico tools are used in gap filling and read-across, but here their use was limited to make predictions based on presence/absence of structural features associated to sensitization. The top 400 ranking substances of the ATSDR 2011 Priority List of Hazardous Substances were selected as a starting point. Experimental information was identified for 160 chemically diverse substances (82 positive and 78 negative). The prediction for skin sensitization potential was compared with the experimental data. Rule-based tools perform slightly better, with accuracies ranging from 0.6 (OECD Toolbox) to 0.78 (Derek Nexus), compared with statistical tools that had accuracies ranging from 0.48 (Vega) to 0.73 (CASE Ultra - LLNA weak model). Combining models increased the performance, with positive and negative predictive values up to 80% and 84%, respectively. However, the number of substances that were predicted positive or negative for skin sensitization in both models was low. Adding more substances to the dataset will increase the confidence in the conclusions reached. The insights obtained in this evaluation are incorporated in a web database www.asopus.weebly.com that provides a potential end user context for the scope and performance of different in silico tools with respect to a common dataset of curated skin sensitization data.
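The reported performance figures reduce to simple confusion-matrix arithmetic; the helper below computes accuracy and the positive/negative predictive values from raw counts. The example counts are invented and do not reproduce any tool's actual results.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy and positive/negative predictive values from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

# e.g. 82 experimentally positive and 78 negative substances scored by one tool
print(classification_metrics(tp=65, fp=15, tn=63, fn=17))
```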
Limmer, Jan; Kornhuber, Johannes; Martin, Alexandra
2015-10-01
While current theories on perception of interoceptive signals suggest impaired interoceptive processing in psychiatric disorders such as panic disorder or depression, heart-rate (HR) interoceptive accuracy (IAc) of panic patients under resting conditions is superior to that of healthy controls. Thus, in this study, we chose to assess further physiological parameters and comorbid depression in order to get information on how these potentially conflicting findings are linked together. We used a quasi-experimental laboratory design which included multi-parametric physiological data collection of 40 panic subjects and 53 matched no-panic controls, as well as experimental induction of stress and relaxation over a time-course. Stress reactivity, interoceptive awareness (IAw; from the Body Perception Questionnaire (BPQ)) and IAc (as correlation between self-estimation and physiological data) were major outcome variables. Self-estimation of bioparametrical change was measured via numeric rating scales. Panic subjects had stronger HR-reaction and more accurate HR-interoception. Concurrently, though, their IAc of skin conductance level, pulse amplitude and breathing amplitude was significantly lower than that of the control group. Interestingly, comorbid depression was found to be associated with increased IAw but attenuated IAc. Demand characteristics and a categorical approach to panic confine the results. The potentially conflicting findings coalesce, as panic was associated with an increase of the ability to perceive the fear-related parameter and a simultaneous decrease of the ability to perceive other parameters. The superordinate integration of afferent signals might be impaired. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Hutsell, Blake A; Banks, Matthew L
2015-08-15
Working memory is a domain of 'executive function.' Delayed nonmatching-to-sample (DNMTS) procedures are commonly used to examine working memory in both human laboratory and preclinical studies. The aim was to develop an automated DNMTS procedure maintained by food pellets in rhesus monkeys using a touch-sensitive screen attached to the housing chamber. Specifically, the DNMTS procedure was a 2-stimulus, 2-choice recognition memory task employing unidimensional discriminative stimuli and randomized delay interval presentations. DNMTS maintained a delay-dependent decrease in discriminability that was independent of the retention interval distribution. Eliminating reinforcer availability during a single delay session or providing food pellets before the session did not systematically alter accuracy, but did reduce total choices. Increasing the intertrial interval enhanced accuracy at short delays. Acute Δ(9)-THC pretreatment produced delay interval-dependent changes in the forgetting function at doses that did not alter total choices. Acute methylphenidate pretreatment only decreased total choices. All monkeys were trained to perform NMTS at the 1s training delay within 60 days of initiating operant touch training. Furthermore, forgetting functions were reliably delay interval-dependent and stable over the experimental period (∼6 months). Consistent with previous studies, increasing the intertrial interval improved DNMTS performance, whereas Δ(9)-THC disrupted DNMTS performance independent of changes in total choices. Overall, the touchscreen-based DNMTS procedure described provides an efficient method for training and testing experimental manipulations on working memory in unrestrained rhesus monkeys. Copyright © 2015 Elsevier B.V. All rights reserved.
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-01-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
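A minimal sketch of the kind of simulation described: a stochastic logistic model whose growth rate and carrying capacity are scaled with temperature by a Boltzmann-Arrhenius factor, then sampled as a daily time series. All parameter values (reference rates, activation energies, noise model) are invented for illustration.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(p_ref, activation_energy, temp, temp_ref=293.15):
    """Boltzmann-Arrhenius scaling of a parameter with activation energy in eV."""
    return p_ref * np.exp(-activation_energy / K_B * (1.0 / temp - 1.0 / temp_ref))

def simulate_logistic(temp, r_ref=0.5, k_ref=1000.0, e_r=0.65, e_k=-0.3,
                      n0=20, days=60, dt=0.1, seed=0):
    """Daily counts from a stochastic logistic model whose growth rate r and
    carrying capacity K follow Arrhenius temperature scaling."""
    rng = np.random.default_rng(seed)
    r = arrhenius(r_ref, e_r, temp)
    k = arrhenius(k_ref, e_k, temp)
    steps_per_day = int(round(1.0 / dt))
    n = float(n0)
    series = []
    for step in range(days * steps_per_day):
        growth = r * n * (1.0 - n / k) * dt
        n = float(rng.poisson(max(n + growth, 0.0)))   # demographic (sampling) noise
        if step % steps_per_day == 0:
            series.append(n)                           # one whole-habitat count per day
    return np.array(series)

for temp in (288.15, 293.15, 298.15):
    print(temp, simulate_logistic(temp)[-1])   # equilibrium abundance shifts with temperature
```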
Impact of Personal Relevance and Contextualization on Word-Picture Matching by People with Aphasia
ERIC Educational Resources Information Center
McKelvey, Miechelle L.; Hux, Karen; Dietz, Aimee; Beukelman, David R.
2010-01-01
Purpose: To determine the effect of personal relevance and contextualization of images on the preferences and word-picture matching accuracy of people with severe aphasia. Method: Eight adults with aphasia performed 2 experimental tasks to reveal their preferences and accuracy during word-picture matching. The researchers used 3 types of visual…
ERIC Educational Resources Information Center
Szadokierski, Isadora Elisabeth
2012-01-01
The current study used the Learning Hierarchy/Instructional Hierarchy (LH/IH) to predict intervention effectiveness based on the reading skills of students who are developing reading fluency. Pre-intervention reading accuracy and rate were assessed for 49 second and third grade participants who then participated in a brief experimental analysis…
Prediction Accuracy: The Role of Feedback in 6th Graders' Recall Predictions
ERIC Educational Resources Information Center
Al-Harthy, Ibrahim S.
2016-01-01
The current study focused on the role of feedback on students' prediction accuracy (calibration). This phenomenon has been widely studied, but questions remain about how best to improve it. In the current investigation, fifty-seven students from sixth grade were randomly assigned to control and experimental groups. Thirty pictures were chosen from…
Wu, Ming-Jui; Chen, Wei-Ling; Kan, Chung-Dann; Yu, Fan-Ming; Wang, Su-Chin; Lin, Hsiu-Hui; Lin, Chia-Hung
2015-12-01
In physical examinations, hemodialysis access stenosis leading to dysfunction occurs at the venous anastomosis site or the outflow vein. Information from the inflow stenosis, such as increases in blood pressure, pressure drop, and flow resistance, allows dysfunction screening from the stage of early clots and thrombosis to the progression of outflow stenosis. Therefore, this study proposes a dysfunction screening model for experimental arteriovenous grafts (AVGs) using a fractional-order extractor (FOE) and color relation analysis (CRA). A Sprott system was designed using an FOE to quantify the differences in transverse vibration pressures between the inflow and outflow sites of an AVG. Experimental analysis revealed that the degree of stenosis (DOS) correlated with an increase in the fractional-order dynamic errors (FODEs). Exponential regression was used to fit a non-linear curve quantifying the relationship between the FODEs and DOS (R^2 = 0.8064). Specific ranges were used to grade the degree of stenosis: DOS < 50%, 50-80%, and > 80%. A CRA-based screening method was derived from the hue angle-saturation-value color model, which describes perceptual color relationships for the DOS. It provides a flexible inference scheme with color visualization to represent the different degrees of stenosis, achieving an average accuracy above 90%, superior to traditional methods. This in vitro experimental study demonstrated that the proposed model can be used for dysfunction screening in stenotic AVGs.
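The exponential-regression step relating the fractional-order dynamic error to the degree of stenosis, and its use for assigning a screening band, can be sketched as below. The paired FODE/DOS values and the initial parameter guesses are invented; only the model form (an exponential fit with an R^2-style goodness of fit) follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(dos, a, b):
    """Exponential relation between degree of stenosis (%) and the FODE."""
    return a * np.exp(b * dos)

# Illustrative (not the paper's) paired measurements: DOS vs. FODE
dos = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
fode = np.array([0.11, 0.13, 0.17, 0.22, 0.30, 0.41, 0.55, 0.76, 1.05])

params, _ = curve_fit(exp_model, dos, fode, p0=(0.1, 0.02))
pred = exp_model(dos, *params)
r_squared = 1 - np.sum((fode - pred) ** 2) / np.sum((fode - fode.mean()) ** 2)
print("a, b =", params, " R^2 =", r_squared)

# Screening bands used in the abstract: DOS < 50%, 50-80%, > 80%
def screen(fode_value):
    est = np.log(fode_value / params[0]) / params[1]
    return "<50%" if est < 50 else ("50-80%" if est <= 80 else ">80%")

print(screen(0.5))
```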
Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias
2014-11-01
Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.
NASA Astrophysics Data System (ADS)
Aumann, T.; Bertulani, C. A.; Schindler, F.; Typel, S.
2017-12-01
An experimentally constrained equation of state of neutron-rich matter is fundamental for the physics of nuclei and the astrophysics of neutron stars, mergers, core-collapse supernova explosions, and the synthesis of heavy elements. To this end, we investigate the potential of constraining the density dependence of the symmetry energy close to saturation density through measurements of neutron-removal cross sections in high-energy nuclear collisions of 0.4 to 1 GeV/nucleon. We show that the sensitivity of the total neutron-removal cross section is high enough so that the required accuracy can be reached experimentally with the recent developments of new detection techniques. We quantify two crucial points to minimize the model dependence of the approach and to reach the required accuracy: the contribution to the cross section from inelastic scattering has to be measured separately in order to allow a direct comparison of experimental cross sections to theoretical cross sections based on density functional theory and eikonal theory. The accuracy of the reaction model should be investigated and quantified by the energy and target dependence of various nucleon-removal cross sections. Our calculations explore the dependence of neutron-removal cross sections on the neutron skin of medium-heavy neutron-rich nuclei, and we demonstrate that the slope parameter L of the symmetry energy could be constrained down to ±10 MeV by such a measurement, with a 2% accuracy of the measured and calculated cross sections.
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result. This paper primarily addresses the former, the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem in discussing which measures critical for aeroacoustics should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what are the measures to be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.
Time averaging of NMR chemical shifts in the MLF peptide in the solid state.
De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele
2010-05-05
Since experimental measurements of NMR chemical shifts provide time- and ensemble-averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales, we sampled the rotameric states populated by the MLF peptides in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
Rolf, Megan M; Taylor, Jeremy F; Schnabel, Robert D; McKay, Stephanie D; McClure, Matthew C; Northcutt, Sally L; Kerley, Monty S; Weaber, Robert L
2010-04-19
Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed, however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
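As a concrete illustration of building a genomic relationship matrix from SNP genotypes, here is one common construction (a VanRaden-style matrix). The abstract does not state which formulation the study used, so treat this as an assumed stand-in, with a tiny genotype matrix instead of the roughly 41,000 SNPs of the real analysis.

```python
import numpy as np

def genomic_relationship_matrix(genotypes):
    """VanRaden-style genomic relationship matrix from an (animals x SNPs)
    genotype matrix coded 0/1/2 (count of the reference allele)."""
    M = np.asarray(genotypes, dtype=float)
    p = M.mean(axis=0) / 2.0                      # allele frequency per SNP
    Z = M - 2.0 * p                               # center by twice the frequency
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

# Tiny example: 4 animals x 6 SNPs
rng = np.random.default_rng(42)
G = genomic_relationship_matrix(rng.integers(0, 3, size=(4, 6)))
print(np.round(G, 2))   # diagonal ~ 1 + inbreeding; off-diagonal ~ genomic relationship
```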
Size Dependent Mechanical Properties of Monolayer Densely Arranged Polystyrene Nanospheres.
Huang, Peng; Zhang, Lijing; Yan, Qingfeng; Guo, Dan; Xie, Guoxin
2016-12-13
In contrast to macroscopic materials, the mechanical properties of polymer nanospheres show fascinating scientific and application value. However, experimental measurements on individual nanospheres and quantitative analysis of the theoretical mechanisms remain less well performed and understood. We provide a highly efficient and accurate method, based on monolayer densely arranged honeycomb polystyrene (PS) nanospheres, for the quantitative mechanical characterization of individual nanospheres by atomic force microscopy (AFM) nanoindentation. The efficiency is improved by one to two orders of magnitude, and the accuracy is enhanced by almost half an order of magnitude. The elastic modulus measured in the experiments increases with decreasing radius down to the smallest nanospheres (25-35 nm in radius). A core-shell model is introduced to predict the size-dependent elasticity of PS nanospheres, and the theoretical prediction agrees reasonably well with the experimental results and also shows a peak modulus value.
Research on electrodischarge drilling of polycrystalline diamond with increased gap voltage
NASA Astrophysics Data System (ADS)
Skoczypiec, Sebastian; Bizoń, Wojciech; Żyra, Agnieszka
2018-05-01
This paper presents an experimental investigation of the machining characteristics of polycrystalline diamond (PCD). Machining PCD by conventional technologies is not an effective solution. Due to the presence of cobalt, this material can be machined by electrical discharges; on the other hand, the electrical conductivity of PCD is at the limit of what electrodischarge machining (EDM) can handle. This paper reports an experimental investigation of electrodischarge drilling of PCD samples. The tests were carried out with a high-voltage (up to 550 V) pulse power unit and two kinds of dielectric: a hydrocarbon-based fluid (Exxsol D80) and de-ionized water. Machining accuracy (side gap) and material removal rate were selected as output parameters. In addition, based on SEM photographs and energy-dispersive X-ray spectroscopy (EDS) analysis, a qualitative evaluation of the obtained results is presented.
Identification of facilitators and barriers to residents' use of a clinical reasoning tool.
DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E
2018-03-28
While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate 3 (PGY-3) residents at a single internal medical residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.
Abawajy, Jemal; Kelarev, Andrei; Chowdhury, Morshed U; Jelinek, Herbert F
2016-01-01
Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another ensemble. We carried out extensive experimental analysis using large datasets from the diabetes screening research initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy, 99.57%, was obtained by the AIME combining Decorate on the top tier with bagging on the middle tier, built on random forest base classifiers. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
Monitoring Building Deformation with InSAR: Experiments and Validation
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-01-01
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples to compare InSAR and leveling approaches for building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403
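The validation arithmetic used here, an OLS regression of InSAR-derived deformation against leveling plus an RMSE, is straightforward to reproduce; the sketch below uses invented millimetre-scale values rather than the paper's measurements.

```python
import numpy as np

def compare_insar_to_leveling(insar_mm, leveling_mm):
    """OLS regression of InSAR-derived deformation against leveling, plus RMSE."""
    x, y = np.asarray(insar_mm, float), np.asarray(leveling_mm, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    rmse = np.sqrt(np.mean((x - y) ** 2))
    return {"slope": slope, "intercept": intercept, "r2": r ** 2, "rmse_mm": rmse}

# Illustrative deformation values at co-located points (mm), not the paper's data
insar =    [-3.1, -2.4, -1.8, -0.9, 0.2, 1.1, 2.0]
leveling = [-2.8, -2.6, -1.5, -1.1, 0.0, 1.3, 1.8]
print(compare_insar_to_leveling(insar, leveling))
```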
On the Structure of Neuronal Population Activity under Fluctuations in Attentional State
Denfield, George H.; Bethge, Matthias; Tolias, Andreas S.
2016-01-01
Attention is commonly thought to improve behavioral performance by increasing response gain and suppressing shared variability in neuronal populations. However, both the focus and the strength of attention are likely to vary from one experimental trial to the next, thereby inducing response variability unknown to the experimenter. Here we study analytically how fluctuations in attentional state affect the structure of population responses in a simple model of spatial and feature attention. In our model, attention acts on the neural response exclusively by modulating each neuron's gain. Neurons are conditionally independent given the stimulus and the attentional gain, and correlated activity arises only from trial-to-trial fluctuations of the attentional state, which are unknown to the experimenter. We find that this simple model can readily explain many aspects of neural response modulation under attention, such as increased response gain, reduced individual and shared variability, increased correlations with firing rates, limited range correlations, and differential correlations. We therefore suggest that attention may act primarily by increasing response gain of individual neurons without affecting their correlation structure. The experimentally observed reduction in correlations may instead result from reduced variability of the attentional gain when a stimulus is attended. Moreover, we show that attentional gain fluctuations, even if unknown to a downstream readout, do not impair the readout accuracy despite inducing limited-range correlations, whereas fluctuations of the attended feature can in principle limit behavioral performance. SIGNIFICANCE STATEMENT Covert attention is one of the most widely studied examples of top-down modulation of neural activity in the visual system. Recent studies argue that attention improves behavioral performance by shaping of the noise distribution to suppress shared variability rather than by increasing response gain. Our work shows, however, that latent, trial-to-trial fluctuations of the focus and strength of attention lead to shared variability that is highly consistent with known experimental observations. Interestingly, fluctuations in the strength of attention do not affect coding performance. As a consequence, the experimentally observed changes in response variability may not be a mechanism of attention, but rather a side effect of attentional allocation strategies in different behavioral contexts. PMID:26843656
Passini, Elisa; Britton, Oliver J; Lu, Hua Rong; Rohrbacher, Jutta; Hermans, An N; Gallacher, David J; Greig, Robert J H; Bueno-Orovio, Alfonso; Rodriguez, Blanca
2017-01-01
Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+ transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing identification of cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+ exchanger, reduced Na+/K+ pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density (fast/late Na+ and Ca2+ currents) exhibit high susceptibility to depolarization abnormalities. Repolarization abnormalities in silico predict clinical risk for all compounds with 89% accuracy. Drug-induced changes in biomarkers are in overall agreement across different assays: in silico AP duration changes reflect the ones observed in the rabbit QT interval and the hiPS-CM Ca2+ transient, and simulated upstroke velocity captures variations in the rabbit QRS complex. Our results demonstrate that human in silico drug trials constitute a powerful methodology for prediction of clinical pro-arrhythmic cardiotoxicity, ready for integration in the existing drug safety assessment pipelines.
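The pore-block drug model named in the abstract reduces each affected ionic conductance by a concentration-dependent factor built from the IC50 and Hill coefficient. The sketch below shows only that scaling step, with an arbitrary baseline conductance; the surrounding action-potential models are not reproduced.

```python
def pore_block_factor(drug_conc, ic50, hill=1.0):
    """Fraction of an ionic current remaining under a simple pore-block model."""
    return 1.0 / (1.0 + (drug_conc / ic50) ** hill)

# Scale a channel conductance before running an AP simulation, e.g. IKr block;
# the baseline value and concentrations are arbitrary illustration numbers.
g_kr_control = 0.153
for conc in (0.1, 1.0, 10.0):   # multiples of the IC50
    print(conc, g_kr_control * pore_block_factor(conc, ic50=1.0, hill=1.0))
```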
Design of measuring system for wire diameter based on sub-pixel edge detection algorithm
NASA Astrophysics Data System (ADS)
Chen, Yudong; Zhou, Wang
2016-09-01
The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but the measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases the cost and manufacturing difficulty. In this paper, through a comparative analysis of several sub-pixel edge detection algorithms, a polynomial fitting method is applied to the data processing of the wire-diameter measuring system, to improve the measuring accuracy and enhance noise robustness. In the system structure, a light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by line jitter in the measuring process. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module; it drives the dual-channel linear CCD and also completes the sampling, processing and storage of the CCD video signal. In addition, the ARM microprocessor handles the high-speed operation of the whole wire-diameter measuring system without additional chips. The experimental results show that a sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
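One standard way to realize sub-pixel edge location by polynomial fitting is to fit a parabola to the gradient magnitude around each coarse edge pixel and take the vertex; the sketch below applies this to a synthetic wire shadow on a linear CCD. The profile shape, pixel size, and the choice of a parabola (rather than a higher-order polynomial) are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate the strongest edge in a 1-D intensity profile with sub-pixel precision
    by fitting a parabola to the gradient magnitude around its maximum."""
    g = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    k = int(np.argmax(g))                               # coarse (pixel-level) position
    y0, y1, y2 = g[k - 1], g[k], g[k + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)     # vertex of the fitted parabola
    return k + offset

def wire_diameter(profile, pixel_size_um):
    """Diameter from the wire's shadow: distance between the two edges."""
    p = np.asarray(profile, dtype=float)
    mid = len(p) // 2
    left = subpixel_edge(p[:mid])
    right = mid + subpixel_edge(p[mid:])
    return (right - left) * pixel_size_um

# Synthetic shadow of a wire on a 7 um-pixel linear CCD (dark band ~80.6 px wide)
x = np.arange(512)
profile = 200.0 / (1 + np.exp(-(np.abs(x - 255.7) - 40.3)))
print(wire_diameter(profile, pixel_size_um=7.0))   # roughly 564 um
```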
Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.
Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M
2017-01-01
Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye gaze. In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye region of faces, as previously assumed. We discuss several methodological issues that may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).
Roles of an Upper-Body Compression Garment on Athletic Performances.
Hooper, David R; Dulkis, Lexie L; Secola, Paul J; Holtzum, Gabriel; Harper, Sean P; Kalkowski, Ryan J; Comstock, Brett A; Szivak, Tunde K; Flanagan, Shawn D; Looney, David P; DuPont, William H; Maresh, Carl M; Volek, Jeff S; Culley, Kevin P; Kraemer, William J
2015-09-01
Compression garments (CGs) have been previously shown to enhance proprioception; however, this benefit has not been previously shown to transfer to improved performance in sports skills. The purpose of this study was to assess whether enhanced proprioception and comfort can be manifested in improved sports performance of high-level athletes. Eleven Division I collegiate pitchers (age: 21.0 ± 2.9 years; height: 181.0 ± 4.6 cm; weight: 89.0 ± 13.0 kg; body fat: 12.0 ± 4.1%) and 10 Division I collegiate golfers (age: 20.0 ± 1.3 years; height: 178.1 ± 3.9 cm; weight: 76.4 ± 8.3 kg; body fat: 11.8 ± 2.6%) participated in the study. A counterbalanced within-group design was used. Subjects performed the respective baseball or golf protocol wearing either typical noncompressive (NC) or the experimental CG. Golfers participated in an assessment of driving distance and accuracy, as well as approach shot, chipping, and putting accuracy. Pitchers were assessed for fastball accuracy and velocity. In pitchers, there was a significant (p ≤ 0.05) improvement in fastball accuracy (NC: 0.30 ± 0.04 vs. CG: 0.21 ± 0.07 cm). There were no differences in pitching velocity. In golfers, there were significant (p ≤ 0.05) improvements in driving accuracy (NC: 86.7 ± 30.6 vs. CG: 68.9 ± 18.5 feet), as well as approach shot accuracy (NC: 26.6 ± 11.9 vs. CG: 22.1 ± 8.2 feet) and chipping accuracy (NC: 2.9 ± 0.6 vs. CG: 2.3 ± 0.6 inch). There was also a significant (p ≤ 0.05) increase in comfort for the golfers (NC: 3.7 ± 0.8 vs. CG: 4.5 ± 1.0). These results demonstrate that comfort and performance can be improved with the use of CGs in high-level athletes being most likely mediated by improved proprioceptive cues during upper-body movements.
Effect of black point on accuracy of LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan
2018-03-01
The black point is the point at which the digital drive value of each RGB channel is 0. Due to light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, and the effect is greater at low-luminance drive values. This paper describes the characterization accuracy of the polynomial model method and the effect of the black point on that accuracy, reporting the resulting color difference. When the black point is considered in the characterization equation, the maximum color difference is 3.246, which is 2.36 lower than the maximum color difference obtained without considering the black point. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
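A minimal sketch of black-point handling in a polynomial RGB-to-XYZ characterization: the leakage measured at RGB = (0, 0, 0) is subtracted before fitting and added back when predicting. The second-order polynomial, the toy display model used to generate training data, and all numbers are assumptions.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of normalized RGB drive values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_characterization(rgb, xyz, xyz_black=None):
    """Least-squares RGB->XYZ model; if xyz_black is given, the leakage measured
    at RGB = (0, 0, 0) is removed before fitting and restored on prediction."""
    target = xyz - xyz_black if xyz_black is not None else xyz
    coeffs, *_ = np.linalg.lstsq(poly_features(rgb), target, rcond=None)
    def predict(rgb_new):
        out = poly_features(np.atleast_2d(rgb_new)) @ coeffs
        return out + xyz_black if xyz_black is not None else out
    return predict

# Illustrative training data: measured XYZ includes a constant black-level leakage
rng = np.random.default_rng(3)
rgb = rng.uniform(0, 1, (64, 3))
xyz_black = np.array([0.4, 0.45, 0.6])                  # leakage measured at the black point
M = np.array([[41.2, 35.8, 18.0],
              [21.3, 71.5, 7.2],
              [1.9, 11.9, 95.0]])                       # toy primary tristimulus matrix
xyz = (rgb ** 2.2) @ M.T + xyz_black
model = fit_characterization(rgb, xyz, xyz_black=xyz_black)
print(model([0.5, 0.5, 0.5]))
```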
NASA Astrophysics Data System (ADS)
Lei, Yao; Bai, Yue; Xu, Zhijun
2018-06-01
This paper proposes an experimental approach for monitoring and inspecting the form accuracy in ultra-precision grinding (UPG) with respect to chatter vibration. Two factors related to the grinding process, the rotational speeds of the grinding wheel and spindle and the oil pressure of the hydrostatic bearing, are taken into account in determining the accuracy. In addition, a mathematical model of the radius deviation caused by the micro-vibration is established and applied in the experiments. The results show that the accuracy is sensitive to the vibration and that the form accuracy is much improved with proper processing parameters. It is found that the accuracy of the aspheric surface can be better than 4 μm when the grinding speed is 1400 r/min and the wheel speed is 100 r/min, with the oil pressure at 1.1 MPa.
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
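The accuracy and positive-predictive-value statistics discussed above reduce to set comparisons between predicted and reference base pairs; a minimal sketch with invented toy structures follows.

```python
def base_pair_scores(predicted_pairs, reference_pairs):
    """Sensitivity and positive predictive value of a predicted RNA secondary
    structure, comparing sets of base pairs (i, j) with i < j."""
    pred, ref = set(predicted_pairs), set(reference_pairs)
    tp = len(pred & ref)
    sensitivity = tp / len(ref) if ref else float("nan")
    ppv = tp / len(pred) if pred else float("nan")
    return sensitivity, ppv

# Toy structures (pairs are 1-based nucleotide indices), invented for illustration
reference = [(1, 20), (2, 19), (3, 18), (6, 14), (7, 13)]
predicted = [(1, 20), (2, 19), (3, 18), (5, 15), (6, 14)]
print(base_pair_scores(predicted, reference))   # (0.8, 0.8)
```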
ERIC Educational Resources Information Center
Ghoneim, Nahed Mohammed Mahmoud; Elghotmy, Heba Elsayed Abdelsalam
2015-01-01
The current study investigates the effect of a suggested multisensory phonics program on developing kindergarten pre-service teachers' EFL reading accuracy and phonemic awareness. A total of 40 fourth year kindergarten pre-service teachers, Faculty of Education, participated in the study that involved one group experimental design. Pre-post tests…
ERIC Educational Resources Information Center
Miyamoto, Karen A.
2005-01-01
A pretest-posttest experimental design was utilized to determine the efficacy of the Yuba Method on inaccurate elementary singers. Testing of pitch accuracy was analyzed using the Sona-Speech Model 3600 software program. Inaccurate singers (N=168) from a population of 320 fourth, fifth, and sixth grade students, were divided into three subgroups…
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of the sampling-site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites per row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shapes and sizes using four sampling methods. Gray correlation analysis was adopted for the comprehensive evaluation, comparing the proposed method with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate per row and column was infinite, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Accuracy of Nonverbal Communication as Determinant of Interpersonal Expectancy Effects
ERIC Educational Resources Information Center
Zuckerman, Miron; And Others
1978-01-01
The person perception paradigm was used to address the effects of experimenters' ability to encode nonverbal cues and subjects' ability to decode nonverbal cues on magnitude of expectancy effects. Greater expectancy effects were obtained when experimenters were better encoders and subjects were better decoders of nonverbal cues. (Author)
2005-08-01
excellent accuracy compared to the F-Scan® during the trials on the hip model. Both systems showed a certain degree of variation...in appendix C. The experimental design consisted of three steps (see Figure 1). Two were undertaken using a physical model of the shoulder in order...increase in accuracy error compared to Table 1 suggests that the current software for the XSENSOR® system is not designed to compensate for errors
Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.
Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A
2015-02-01
This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High Efficiency Video Coding (HEVC) is a video compression standard that outperforms its predecessor, H.264/AVC, by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4×4 to 64×64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. Spatial correlation utilizes the coding tree unit (CTU) splitting information, and temporal correlation utilizes the CTU referenced by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
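As an illustration of the general idea behind such fast depth decisions, the sketch below bounds the CU depth search for a CTU by the depths observed in its already-coded spatial neighbours and the temporally co-located CTU. The neighbour set and the +1 margin are assumptions for illustration, not the paper's exact rule.

```python
# Illustrative sketch only: cap the CU depth search range for a CTU using the
# depths of already-coded spatial neighbours and the temporally co-located CTU.
# The neighbour set and the +1 margin are assumptions, not the paper's exact rule.

def predicted_max_depth(spatial_neighbor_depths, colocated_depth, absolute_max=3):
    candidates = list(spatial_neighbor_depths) + [colocated_depth]
    if not candidates:
        return absolute_max               # no context: search the full range
    return min(max(candidates) + 1, absolute_max)

# Hypothetical CTU context: left/up/up-left neighbours reached depths 1, 2, 1,
# and the co-located CTU in the previous frame reached depth 2.
print(predicted_max_depth([1, 2, 1], 2))  # -> 3 (2 + 1, capped at the absolute max)
```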
Teaching identity matching of braille characters to beginning braille readers.
Toussaint, Karen A; Scheithauer, Mindy C; Tiger, Jeffrey H; Saunders, Kathryn J
2017-04-01
We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially arranged braille characters into training sets in which there was a maximum difference in the number of dots comprising the target and nontarget comparison stimuli. As participants mastered these discriminations, we increased the similarity between target and nontarget comparisons (i.e., an approximation of stimulus fading). All three participants' accuracy systematically increased following the introduction of this identity-matching procedure. © 2017 Society for the Experimental Analysis of Behavior.
Depth calibration of the Experimental Advanced Airborne Research Lidar, EAARL-B
Wright, C. Wayne; Kranenburg, Christine J.; Troche, Rodolfo J.; Mitchell, Richard W.; Nagle, David B.
2016-05-17
The resulting calibrated EAARL-B data were then analyzed and compared with the original reference dataset, the jet-ski-based dataset from the same Fort Lauderdale site, as well as the depth-accuracy requirements of the International Hydrographic Organization (IHO). We do not claim to meet all of the IHO requirements and standards. The IHO minimum depth-accuracy requirements were used as a reference only and we do not address the other IHO requirements such as “Full Seafloor Search”. Our results show good agreement between the calibrated EAARL-B data and all reference datasets, with results that are within the 95 percent depth accuracy of the IHO Order 1 (a and b) depth-accuracy requirements.
Beta Testing of CFD Code for the Analysis of Combustion Systems
NASA Technical Reports Server (NTRS)
Yee, Emma; Wey, Thomas
2015-01-01
A preliminary version of OpenNCC was tested to assess its accuracy in generating steady-state temperature fields for combustion systems at atmospheric conditions using three-dimensional tetrahedral meshes. Meshes were generated from a CAD model of a single-element lean-direct injection combustor, and the latest version of OpenNCC was used to calculate combustor temperature fields. OpenNCC was shown to be capable of generating sustainable reacting flames using a tetrahedral mesh, and the subsequent results were compared to experimental results. While nonreacting flow results closely matched experimental results, a significant discrepancy was present between the code's reacting flow results and experimental results. When wide air circulation regions with high velocities were present in the model, this appeared to create inaccurately high temperature fields. Conversely, low recirculation velocities caused low temperature profiles. These observations will aid in future modification of OpenNCC reacting flow input parameters to improve the accuracy of calculated temperature fields.
Study regarding the spline interpolation accuracy of the experimentally acquired data
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Danisor, Alin; Tamas, Razvan
2016-12-01
Experimental data processing is an issue that must be solved in almost all domains of science. In engineering we usually have a large amount of data and we try to extract the useful signal which is relevant for the phenomenon under investigation. The criteria used to consider some points more relevant than others may take into consideration various conditions, which may be either phenomenon-dependent or general. The paper presents some of the ideas and tests regarding the identification of the best set of criteria used to filter the initial set of points in order to extract a subset which best fits the approximated function. If the function has regions where it is either constant or slowly varying, fewer discretization points may be used. This leads to a simpler solution for processing the experimental data, while keeping the accuracy within reasonably good limits.
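A minimal sketch of this point, under assumed conditions (a synthetic signal with a flat region and RMS error as the accuracy measure): a smoothing spline built on a decimated subset of the samples can approximate the underlying function nearly as well as one built on all samples.

```python
# Sketch (assumptions: synthetic signal, RMS error as the accuracy measure):
# compare a spline built on all samples with one built on a decimated subset.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 10.0, 400)
y = np.where(x < 5.0, 1.0, 1.0 + np.sin(2.0 * (x - 5.0)))  # flat region, then oscillation
noisy = y + 0.01 * np.random.default_rng(0).normal(size=x.size)

full = UnivariateSpline(x, noisy, s=0.05)
subset = slice(None, None, 8)                     # keep every 8th point
sparse = UnivariateSpline(x[subset], noisy[subset], s=0.05)

rms_full = np.sqrt(np.mean((full(x) - y) ** 2))
rms_sparse = np.sqrt(np.mean((sparse(x) - y) ** 2))
print(f"RMS error, all points: {rms_full:.4f}; every 8th point: {rms_sparse:.4f}")
```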
Self-consistent radiation-based simulation of electric arcs: II. Application to gas circuit breakers
NASA Astrophysics Data System (ADS)
Iordanidis, A. A.; Franck, C. M.
2008-07-01
An accurate and robust method for radiative heat transfer simulation for arc applications was presented in the previous paper (part I). In this paper a self-consistent mathematical model based on computational fluid dynamics and a rigorous radiative heat transfer model is described. The model is applied to simulate switching arcs in high voltage gas circuit breakers. The accuracy of the model is proven by comparison with experimental data for all arc modes. The ablation-controlled arc model is used to simulate high current PTFE arcs burning in cylindrical tubes. Model accuracy for the lower current arcs is evaluated using experimental data on the axially blown SF6 arc in steady state and arc resistance measurements close to current zero. The complete switching process with the arc going through all three phases is also simulated and compared with the experimental data from an industrial circuit breaker switching test.
A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)
Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin
2016-01-01
This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the inverse kinematics computed by the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in the input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. Comprehensive experimental results proved the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint angles. The new controller design has advantages over existing techniques in minimizing the position error in unconventional tasks and increasing the accuracy of the ANN in estimating the robot's joint angles. PMID:27610129
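The core idea, augmenting the network input with the current joint configuration, can be sketched on a toy two-link planar arm rather than the six-DOF Denso arm; the arm geometry, the training scheme, and the use of scikit-learn's MLPRegressor are assumptions for illustration only.

```python
# Minimal sketch of the idea on a toy 2-link planar arm (not the 6-DOF Denso arm):
# the network input is the current joint configuration plus the desired position.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8
def forward(q):                       # q: (..., 2) joint angles -> (..., 2) xy position
    x = L1 * np.cos(q[..., 0]) + L2 * np.cos(q[..., 0] + q[..., 1])
    y = L1 * np.sin(q[..., 0]) + L2 * np.sin(q[..., 0] + q[..., 1])
    return np.stack([x, y], axis=-1)

rng = np.random.default_rng(1)
q_target = rng.uniform(-np.pi, np.pi, size=(5000, 2))
q_current = q_target + rng.normal(scale=0.2, size=q_target.shape)  # nearby start pose
X = np.hstack([q_current, forward(q_target)])   # [current angles, desired x, y]
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, q_target)

q_hat = net.predict(X[:5])
print(np.linalg.norm(forward(q_hat) - forward(q_target[:5]), axis=1))  # position errors
```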
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.
Li, Yuexiang; Shen, Linlin
2018-02-11
Skin lesions are a severe health problem globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address the three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2), and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show promising accuracies for our frameworks: 0.753 for task 1, 0.848 for task 2, and 0.912 for task 3.
Diagrams benefit symbolic problem-solving.
Chu, Junyi; Rittle-Johnson, Bethany; Fyfe, Emily R
2017-06-01
The format of a mathematics problem often influences students' problem-solving performance. For example, providing diagrams in conjunction with story problems can benefit students' understanding, choice of strategy, and accuracy on story problems. However, it remains unclear whether providing diagrams in conjunction with symbolic equations can benefit problem-solving performance as well. We tested the impact of diagram presence on students' performance on algebra equation problems to determine whether diagrams increase problem-solving success. We also examined the influence of item- and student-level factors to test the robustness of the diagram effect. We worked with 61 seventh-grade students who had received 2 months of pre-algebra instruction. Students participated in an experimenter-led classroom session. Using a within-subjects design, students solved algebra problems in two matched formats (equation and equation-with-diagram). The presence of diagrams increased equation-solving accuracy and the use of informal strategies. This diagram benefit was independent of student ability and item complexity. The benefits of diagrams found previously for story problems generalized to symbolic problems. The findings are consistent with cognitive models of problem-solving and suggest that diagrams may be a useful additional representation of symbolic problems. © 2017 The British Psychological Society.
Accuracy of Reaction Cross Section for Exotic Nuclei in Glauber Model Based on MCMC Diagnostics
NASA Astrophysics Data System (ADS)
Rueter, Keiti; Novikov, Ivan
2017-01-01
Parameters of the nuclear density distribution for exotic nuclei with halo or skin structures can be determined from the experimentally measured reaction cross-section. In the presented work, to extract parameters such as the nuclear size of the halo and core, we compare experimental data on reaction cross-sections with values obtained using expressions of the Glauber model. These calculations are performed using a Markov Chain Monte Carlo algorithm. We discuss the accuracy of the Monte Carlo approach and its dependence on k*, the power-law turnover point in the discrete power spectrum of the random number sequence, and on the lag-1 autocorrelation time of the random number sequence.
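Since the abstract ties Monte Carlo accuracy to the lag-1 autocorrelation of the random number sequence, a short sketch of that diagnostic is given below; the test sequence is a hypothetical generator output, not the one used in the study.

```python
# Sketch: estimate the lag-1 autocorrelation of a (pseudo-)random sequence,
# one of the diagnostics mentioned in the abstract. The sequence is hypothetical.
import numpy as np

def lag1_autocorrelation(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

u = np.random.default_rng(42).random(100_000)
print(f"lag-1 autocorrelation: {lag1_autocorrelation(u):+.4f}")  # near 0 for a good generator
```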
1981-08-01
Pressure scanner: Scanivalve system with 288 ports. Pressure transducers: Druck, with ranges from 0-10 psia to 0-500 psia; accuracy ±0.06% BSL. Thermocouple channels... Pressure scanner: Scanivalve system with 120 trapping ports. Pressure transducers: Druck, 0-25 psia to 0-100 psia; accuracy ±0.05% BSL. Thermocouple...
Dørum, Erlend S; Kaufmann, Tobias; Alnæs, Dag; Andreassen, Ole A; Richard, Geneviève; Kolskår, Knut K; Nordvik, Jan Egil; Westlye, Lars T
2017-03-01
Age-related differences in cognitive agility vary greatly between individuals and cognitive functions. This heterogeneity is partly mirrored in individual differences in brain network connectivity as revealed using resting-state functional magnetic resonance imaging (fMRI), suggesting potential imaging biomarkers for age-related cognitive decline. However, although convenient in its simplicity, the resting state is essentially an unconstrained paradigm with minimal experimental control. Here, based on the conception that the magnitude and characteristics of age-related differences in brain connectivity is dependent on cognitive context and effort, we tested the hypothesis that experimentally increasing cognitive load boosts the sensitivity to age and changes the discriminative network configurations. To this end, we obtained fMRI data from younger (n=25, mean age 24.16±5.11) and older (n=22, mean age 65.09±7.53) healthy adults during rest and two load levels of continuous multiple object tracking (MOT). Brain network nodes and their time-series were estimated using independent component analysis (ICA) and dual regression, and the edges in the brain networks were defined as the regularized partial temporal correlations between each of the node pairs at the individual level. Using machine learning based on a cross-validated regularized linear discriminant analysis (rLDA) we attempted to classify groups and cognitive load from the full set of edge-wise functional connectivity indices. While group classification using resting-state data was highly above chance (approx. 70% accuracy), functional connectivity (FC) obtained during MOT strongly increased classification performance, with 82% accuracy for the young and 95% accuracy for the old group at the highest load level. Further, machine learning revealed stronger differentiation between rest and task in young compared to older individuals, supporting the notion of network dedifferentiation in cognitive aging. Task-modulation in edgewise FC was primarily observed between attention- and sensorimotor networks; with decreased negative correlations between attention- and default mode networks in older adults. These results demonstrate that the magnitude and configuration of age-related differences in brain functional connectivity are partly dependent on cognitive context and load, which emphasizes the importance of assessing brain connectivity differences across a range of cognitive contexts beyond the resting-state. Copyright © 2017 Elsevier Inc. All rights reserved.
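The classification step described above can be illustrated generically: a regularized (shrinkage) linear discriminant classifier applied to edge-wise connectivity features under cross-validation. The data below are synthetic and scikit-learn's shrinkage LDA stands in for the study's rLDA implementation; both are explicitly assumptions.

```python
# Generic sketch (synthetic data; scikit-learn shrinkage LDA as a stand-in for rLDA):
# cross-validated classification of group labels from edge-wise connectivity features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_young, n_old, n_edges = 25, 22, 300
X = rng.normal(size=(n_young + n_old, n_edges))
X[n_young:, :20] += 0.8                      # inject a weak group difference
y = np.array([0] * n_young + [1] * n_old)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # regularized LDA
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"mean CV accuracy: {scores.mean():.2f}")
```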
Hochberger, William C; Axelrod, Jenna L; Sarapas, Casey; Shankman, Stewart A; Hill, S Kristian
2018-06-08
Research suggests that increasing delays in stimulus read-out can trigger declines in serial order recall accuracy due to increases in cognitive demand imposed by the delay; however, the exact neural mechanisms associated with this decline are unclear. Changes in neural resource allocation present as the ideal target and can easily be monitored by examining changes in the amplitude of an ERP component known as the P3. Changes in P3 amplitude secondary to exogenous pacing of stimulus read-out via increased target-to-target intervals (TTI) during recall could reflect decreased neural resource allocation due to increased cognitive demand. This shift in resource allocation could result in working memory storage decay and the declines in serial order accuracy described by prior research. In order to examine this potential effect, participants were administered a spatial serial order processing task, with the recall series consisting of a series of correct ("match") or incorrect ("non-match" or "oddball") stimuli. Moreover, the recall series included either a brief (500ms) or extended (2000ms) delay between stimuli. Results were significant for the presence of a P3 response to non-match stimuli for both experimental conditions, and attenuation of P3 amplitude secondary to the increase in target-to-target interval (TTI). These findings suggest that extending the delay between target recognition could increase cognitive demand and trigger a decrease in neural resource allocation that results in a decay of working memory stores.
Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal
2016-10-01
In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts, free falling, pneumatic hitting, and hydraulic crushing, were separately recorded using two different microphones. To classify the waste types and sizes based on the sound features of the wastes, support vector machine (SVM) and hidden Markov model (HMM) based sound classification systems were developed. In the basic experimental setup, in which only the free-falling impact type was considered, the SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup, which includes all three impact types, material type classification accuracies were 96.5% for the dynamic microphone and 97.7% for the condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for the microphones. The modeling studies indicated that the hydraulic crushing recordings were too noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach for the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Flow field and friction factor of slush nitrogen in a horizontal circular pipe
NASA Astrophysics Data System (ADS)
Jin, Tao; Li, Yijian; Wu, Shuqin; Wei, Jianjian
2018-04-01
Slush nitrogen is a low-temperature two-phase fluid with solid nitrogen particles suspended in liquid nitrogen. The flow characteristics of slush nitrogen in a horizontal pipe with a diameter of 16 mm have been investigated experimentally and numerically, under operating conditions with an inlet flow velocity of 0-4 m/s and a solid volume fraction of 0-23%. The numerical results for pressure drop agree well with those of the experiments, with relative errors of ±5%. The experimental and numerical results both show that the pressure drop of slush nitrogen is greater than that of subcooled liquid nitrogen and rises with increasing particle concentration under the working conditions of the present work. Based on the simulation results, the flow pattern evolution of slush nitrogen with increasing slush Reynolds number has been discussed; the flow can be classified into homogeneous flow, heterogeneous flow, and moving bed. The slush effective viscosity and the slush Reynolds number are calculated with the Cheng & Law formula, which includes the effects of particle shape, size, and type and has high accuracy for high-concentration slurries. Based on the slush Reynolds number, an empirical experimental correlation for the friction factor of slush nitrogen flow that accounts for particle conditions is obtained.
Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †
Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao
2018-01-01
An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization, and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006
Experimental and QSAR study on the surface activities of alkyl imidazoline surfactants
NASA Astrophysics Data System (ADS)
Kong, Xiangjun; Qian, Chengduo; Fan, Weiyu; Liang, Zupei
2018-03-01
Fifteen alkyl imidazoline surfactants with different structures were synthesized, and their critical micelle concentration (CMC) and surface tension at the CMC (σcmc) in aqueous solution were measured at 298 K. Fifty-four molecular structure descriptors were selected as independent variables, and the quantitative structure-activity relationship (QSAR) between the surface activities of alkyl imidazoline and molecular structure was built through the genetic function approximation (GFA) method. Experimental results showed that the maximum surface excess of alkyl imidazoline molecules at the gas-liquid interface increased, while the area occupied by each surfactant molecule and the free energy of micellization ΔGm decreased, with increasing carbon number (NC) of the hydrophobic chain or decreasing hydrophilicity of the counterions, which resulted in a decrease of the CMC and σcmc; log CMC and NC showed a linear relationship with a negative correlation. The GFA-QSAR model, generated from a training set of 13 alkyl imidazolines through GFA regression analysis, gave predicted CMC values highly correlated with the experimental values; the correlation coefficient R was 0.9991, indicating high prediction accuracy. The prediction error for the CMCs of the 2 alkyl imidazolines in the validation set, which quantitatively analyzed the influence of the alkyl imidazoline molecular structure on the CMC, was less than 4%.
Thrust Deduction in Contrarotating Propellers
1974-11-01
rudder gave At = 0.056; design CR propellers (Table 2), At = 0.029; single screw, Strom-Tejsen 14. Very good agreement between the experimental and... design experimental points do not lie on the theoretical curve. This is believed to be due to either experimental test accuracy, or the rudder effect, or...propellers. Contrarotating propellers operating at off-design loading and spacing, as well as the contribution of a rudder, were investigated. The…
Automation of energy demand forecasting
NASA Astrophysics Data System (ADS)
Siddique, Sanzad
Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model search process for econometric models. Further improvements in the accuracy of energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model search technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
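A hedged sketch of the general model-search idea follows: evaluate each candidate forecasting model with time-series cross-validation and keep the best. The candidate set, the feature construction, and the synthetic demand signal are illustrative assumptions, not the thesis's actual model space.

```python
# Hedged sketch of automated model selection for a demand signal: evaluate each
# candidate with time-series cross-validation and keep the best. Candidates and
# the synthetic signal are illustrative assumptions, not the thesis's model space.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

t = np.arange(600)
demand = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 24) \
         + np.random.default_rng(3).normal(scale=2.0, size=t.size)
X = np.column_stack([t, np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])

candidates = {
    "linear": LinearRegression(),
    "gbr (nonlinear)": GradientBoostingRegressor(random_state=0),
}
cv = TimeSeriesSplit(n_splits=5)
scores = {name: cross_val_score(m, X, demand, cv=cv,
                                scoring="neg_mean_absolute_error").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```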
Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; ...
2017-02-06
Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. Furthermore, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.
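The basic geometry behind multi-Doppler retrieval can be shown with a small linear solve: three non-coplanar radial velocities determine the three wind components. The azimuth/elevation convention and the hypothetical beam angles below are assumptions, and this is not the XPIA processing chain.

```python
# Sketch of the basic geometry (not the XPIA processing chain): each lidar measures
# a radial velocity v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el).
# With three non-coplanar beams, (u, v, w) follows from a 3x3 linear solve.
# Azimuth clockwise from north and elevation above horizontal are assumed conventions.
import numpy as np

def retrieve_uvw(azimuths_deg, elevations_deg, radial_velocities):
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    return np.linalg.solve(A, radial_velocities)

# Hypothetical check: forward-project a known wind, then invert it.
u, v, w = 6.0, -2.5, 0.3
az, el = [30.0, 150.0, 270.0], [10.0, 25.0, 40.0]
vr = [u * np.sin(np.radians(a)) * np.cos(np.radians(e))
      + v * np.cos(np.radians(a)) * np.cos(np.radians(e))
      + w * np.sin(np.radians(e)) for a, e in zip(az, el)]
print(retrieve_uvw(az, el, vr))   # ~ [6.0, -2.5, 0.3]
```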
Quantitative hard x-ray phase contrast imaging of micropipes in SiC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohn, V. G.; Argunova, T. S.; Je, J. H., E-mail: jhje@postech.ac.kr
2013-12-15
Peculiarities of quantitative hard x-ray phase contrast imaging of micropipes in SiC are discussed. The micropipe is assumed to be a hollow cylinder with an elliptical cross section. The major and minor diameters can be restored using a least-squares fitting procedure by comparing the experimental data, i.e. the profile across the micropipe axis, with those calculated based on phase contrast theory. It is shown that one projection image gives information that does not allow a complete determination of the elliptical cross section if the orientation of the micropipe is not known. Another problem is the limited accuracy in estimating the diameters, partly because of the use of pink synchrotron radiation, which is necessary because a monochromatic beam intensity is not sufficient to reveal the weak contrast from a very small object. The general problems of accuracy in estimating the two diameters using the least-squares procedure are discussed. Two experimental examples are considered to demonstrate small as well as modest accuracies in estimating the diameters.
Preliminary GaoFen-3 InSAR DEM Accuracy Analysis
NASA Astrophysics Data System (ADS)
Chen, Q.; Li, T.; Tang, X.; Gao, X.; Zhang, X.
2018-04-01
The GF-3 satellite, the first C-band, full-polarization SAR satellite of China with a spatial resolution of 1 m, was successfully launched in August 2016. In this paper, we analyze the error sources of the GF-3 satellite and provide an interferometric calibration model based on the range function, the Doppler shift equation, and the interferometric phase function, with the interferometric parameters calibrated using the three-dimensional coordinates of ground control points. We then conduct experiments on two pairs of images in fine stripmap I mode covering Songshan in Henan Province and Tangshan in Hebei Province, respectively. The DEM data are assessed using the SRTM DEM, ICESat-GLAS points, and a ground control point database obtained from the ZY-3 satellite to validate the accuracy of the DEM elevation. The experimental results show that the accuracy of the DEM extracted from GF-3 SAR data can meet the requirements of topographic mapping in mountain and alpine regions at the scale of 1:50,000 in China. Moreover, this demonstrates that the GF-3 satellite has potential for interferometry.
Methodological considerations for global analysis of cellular FLIM/FRET measurements
NASA Astrophysics Data System (ADS)
Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.
2012-02-01
Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis on cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.
Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.
2011-01-01
This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprising a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer-generated counts suffers from two drawbacks: imprecision (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady-state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of the sensor streams and show how one can map this representation to energy expenditure (as measured by VO2 consumption, mL/min) using three regression techniques: Least Squares Regression (LSR), Bayesian Linear Regression (BLR), and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR-based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude more run time. This paper emphasizes the role of probabilistic techniques, in conjunction with joint modeling of triaxial accelerations and rotational rates, in improving energy expenditure prediction for steady-state treadmill walking. PMID:21690001
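The pipeline of Fourier features followed by regression can be sketched generically: low-order FFT magnitudes of an inertial signal regressed onto VO2 with least-squares and Gaussian-process models for comparison. The synthetic signals and the toy speed-to-VO2 relation below are assumptions, not the study's data or protocol.

```python
# Generic sketch (synthetic data, not the study's protocol): low-order Fourier
# magnitudes of a hip-worn signal as features, regressed onto VO2 with
# least-squares and Gaussian-process regression for comparison.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_windows, n_samples = 200, 256
speed = rng.uniform(0.8, 2.0, n_windows)                 # walking-speed proxy
t = np.arange(n_samples) / 100.0
# One accelerometer axis per window; amplitude and cadence grow with speed.
acc = speed[:, None] * np.sin(2 * np.pi * (1.5 + speed)[:, None] * t) \
      + 0.05 * rng.normal(size=(n_windows, n_samples))
vo2 = 300 + 400 * speed + 20 * rng.normal(size=n_windows)  # mL/min, toy relation

features = np.abs(np.fft.rfft(acc, axis=1))[:, 1:9]        # first 8 harmonics
X_tr, X_te, y_tr, y_te = train_test_split(features, vo2, random_state=0)

for name, model in [("LSR", LinearRegression()),
                    ("GPR", GaussianProcessRegressor(alpha=1e-2, normalize_y=True))]:
    model.fit(X_tr, y_tr)
    rms = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"{name}: RMS prediction error {rms:.1f} mL/min")
```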
Increasing Deception Detection Accuracy with Strategic Questioning
ERIC Educational Resources Information Center
Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.
2010-01-01
One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta-Schubert, N.; Reyes, M.A.
2007-11-15
The predictive accuracy of the generalized liquid drop model (GLDM) formula for alpha-decay half-lives has been investigated in a detailed manner and a variant of the formula with improved coefficients is proposed. The method employs the experimental alpha half-lives of the well-known alpha standards to obtain the coefficients of the analytical formula using the experimental Qα values (the DSR-E formula), as well as the finite range droplet model (FRDM) derived Qα values (the FRDM-FRDM formula). The predictive accuracy of these formulae was checked against the experimental alpha half-lives of an independent set of nuclei (TEST) that span approximately the same Z, A region as the standards and possess reliable alpha spectroscopic data, and were found to yield good results for the DSR-E formula but not for the FRDM-FRDM formula. The two formulae were used to obtain the alpha half-lives of superheavy elements (SHE) and heavy nuclides where the relative accuracy was found to be markedly improved for the FRDM-FRDM formula, which corroborates the appropriateness of the FRDM masses and the GLDM prescription for high Z, A nuclides. Further improvement resulted, especially for the FRDM-FRDM formula, after a simple linear optimization over the calculated and experimental half-lives of TEST was used to re-calculate the half-lives of the SHE and heavy nuclides. The advantage of this optimization was that it required no re-calculation of the coefficients of the basic DSR-E or FRDM-FRDM formulae. The half-lives for 324 medium-mass to superheavy alpha-decaying nuclides, calculated using these formulae, and the comparison with experimental half-lives are presented.
Analytical approach on the stiffness of MR fluid filled spring
NASA Astrophysics Data System (ADS)
Sikulskyi, Stanislav; Kim, Daewon
2017-04-01
A solid mechanical spring generally exhibits uniform stiffness. This paper studies a mechanical spring filled with magnetorheological (MR) fluid to achieve controllable stiffness. The hollow spring filled with MR fluid is subjected to a controlled magnetic field in order to change the viscosity of the MR fluid and thereby change the overall stiffness of the spring. The MR fluid is considered as a Bingham viscoplastic linear material in the mathematical model. The goal of this research is to study the feasibility of such a spring system by analytically computing the effects of the MR fluid on the overall spring stiffness. For this purpose, spring mechanics and MR fluid behavior are studied to increase the accuracy of the analysis. Numerical simulations are also performed to justify assumptions that simplify the calculations in the analytical part. The accuracy of the present approach is validated by comparing the analytical results to previously reported experimental results. Overall stiffness variations of the spring are also discussed for different spring designs.
Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.
Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa
2015-09-01
Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purposes of this study were to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, and B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.
Fuzzy regression modeling for tool performance prediction and degradation detection.
Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L
2010-10-01
In this paper, the viability of using Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study - namely to predict the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior as compared with conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
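A heavily simplified sketch of the cluster-then-regress idea behind such a model is given below. Two substitutions are made deliberately and are not the paper's method: KMeans stands in for the SOM clustering, and hard cluster membership stands in for the fuzzy inference over multiple regression models; the piecewise-linear target is synthetic.

```python
# Simplified sketch of the cluster-then-regress idea. Labeled substitutions:
# KMeans stands in for SOM clustering, and hard cluster membership stands in
# for the fuzzy inference over multiple regression models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, size=(400, 2))                   # e.g. features from force/vibration signals
y = np.where(X[:, 0] < 5, 2.0 * X[:, 1] + 1.0, -1.5 * X[:, 1] + 20.0) \
    + 0.3 * rng.normal(size=400)                        # piecewise-linear "tool wear" target

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
local_models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(km.n_clusters)}

def predict(x_new):
    c = int(km.predict(x_new.reshape(1, -1))[0])        # pick the local model
    return float(local_models[c].predict(x_new.reshape(1, -1))[0])

print(predict(np.array([2.0, 3.0])), predict(np.array([8.0, 3.0])))
```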
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
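The kernel comparison itself is easy to reproduce in outline; the sketch below uses hypothetical multi-temporal features (not the UAVSAR data) and scikit-learn's SVC to compare linear, 3rd-degree polynomial, and RBF kernels under cross-validation.

```python
# Sketch with hypothetical multi-temporal polarimetric features (not the UAVSAR
# data): compare linear, 3rd-degree polynomial, and RBF kernels under cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n_per_class, n_features = 150, 9          # e.g. 3 dates x 3 decomposition features
means = rng.normal(scale=1.5, size=(4, n_features))
X = np.vstack([rng.normal(loc=m, size=(n_per_class, n_features)) for m in means])
y = np.repeat(np.arange(4), n_per_class)  # four crop classes

for kernel, kwargs in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {})]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, **kwargs))
    oa = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:6s} kernel: overall accuracy {oa:.3f}")
```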
Study on Octahedral Spherical Hohlraum
NASA Astrophysics Data System (ADS)
Lan, Ke; Liu, Jie; Huo, Wenyi; Li, Zhichao; Yang, Dong; Li, Sanwei; Ren, Guoli; Chen, Yaohua; Jiang, Shaoen; He, Xian-Tu; Zhang, Weiyan
2015-11-01
In this talk, we report our recent study on the octahedral spherical hohlraum, which has six laser entrance holes (LEHs). First, our study shows that octahedral hohlraums maintain robust high symmetry during the capsule implosion at hohlraum-to-capsule radius ratios larger than 3.7 and have potential superiority in low backscatter without supplementary technology. Second, we study the laser arrangement and constraints of the octahedral hohlraums and give their laser arrangement design for an ignition facility. Third, we propose a novel octahedral hohlraum with LEH shields and cylindrical LEHs, in order to increase the laser coupling efficiency, improve the capsule symmetry, and mitigate the influence of the wall blowoff on laser transport. Fourth, we study the sensitivity of capsule symmetry inside the octahedral hohlraums to laser power balance, pointing accuracy, deviations from the optimal position, and target fabrication accuracy, and compare the results with those of traditional cylinders and rugby hohlraums. Finally, we present our recent experimental studies on octahedral hohlraums at the SGIII prototype laser facility.
Game theoretic approach for cooperative feature extraction in camera networks
NASA Astrophysics Data System (ADS)
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
Smoking modulates language lateralization in a sex-specific way.
Hahn, Constanze; Pogun, Sakire; Güntürkün, Onur
2010-12-01
Smoking affects a widespread network of neuronal functions by altering the properties of acetylcholinergic transmission. Recent studies show that nicotine consumption affects ascending auditory pathways and alters auditory attention, particularly in men. Here we show that smoking affects language lateralization in a sex-specific way. We assessed brain asymmetries of 90 healthy, right-handed participants using a classic consonant-vowel syllable dichotic listening paradigm in a 2×3 experimental design with sex (male, female) and smoking status (non-smoker, light smoker, heavy smoker) as between-subject factors. Our results revealed that male smokers had a significantly less lateralized response pattern compared to the other groups due to a decreased response rate of their right ear. This finding suggests a group-specific impairment of the speech dominant left hemisphere. In addition, decreased overall response accuracy was observed in male smokers compared to the other experimental groups. Similar adverse effects of smoking were not detected in women. Further, a significant negative correlation was detected between the severity of nicotine dependency and response accuracy in male but not in female smokers. Taken together, these results show that smoking modulates functional brain lateralization significantly and in a sexually dimorphic manner. Given that some psychiatric disorders have been associated with altered brain asymmetries and increased smoking prevalence, nicotinergic effects need to be specifically investigated in this context in future studies. Copyright © 2010 Elsevier Ltd. All rights reserved.
Evaluating DFT for Transition Metals and Binaries: Developing the V/DM-17 Test Set
NASA Astrophysics Data System (ADS)
Decolvenaere, Elizabeth; Mattsson, Ann
We have developed the V/DM-17 test set to evaluate the experimental accuracy of DFT calculations of transition metals. When simulation and experiment disagree, the disconnect in length scales and temperatures makes determining "who is right" difficult. However, methods to evaluate the experimental accuracy of functionals in the context of solid-state materials science, especially for transition metals, are lacking. As DFT undergoes a shift from a descriptive to a predictive tool, these issues of verification are becoming increasingly important. With undertakings like the Materials Project leading the way in high-throughput predictions and discoveries, the development of a one-size-fits-most approach to verification is critical. Our test set evaluates 26 transition metal elements and 80 transition metal alloys across three physical observables: lattice constants, elastic coefficients, and formation energies of alloys. Whether or not the formation energy can be reproduced measures whether the relevant physics is captured in a calculation. This is an especially important question in transition metals, where active d-electrons can thwart commonly used techniques. In testing the V/DM-17 test set, we offer new views into the performance of existing functionals. Sandia National Labs is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Svelle, Stian; Tuma, Christian; Rozanska, Xavier; Kerber, Torsten; Sauer, Joachim
2009-01-21
The methylation of ethene, propene, and t-2-butene by methanol over the acidic microporous H-ZSM-5 catalyst has been investigated by a range of computational methods. Density functional theory (DFT) with periodic boundary conditions (PBE functional) fails to describe the experimentally determined decrease of apparent energy barriers with the alkene size due to inadequate description of dispersion forces. Adding a damped dispersion term expressed as a parametrized sum over atom pair C(6) contributions leads to uniformly underestimated barriers due to self-interaction errors. A hybrid MP2:DFT scheme is presented that combines MP2 energy calculations on a series of cluster models of increasing size with periodic DFT calculations, which allows extrapolation to the periodic MP2 limit. Additionally, errors caused by the use of finite basis sets, contributions of higher order correlation effects, zero-point vibrational energy, and thermal contributions to the enthalpy were evaluated and added to the "periodic" MP2 estimate. This multistep approach leads to enthalpy barriers at 623 K of 104, 77, and 48 kJ/mol for ethene, propene, and t-2-butene, respectively, which deviate from the experimentally measured values by 0, +13, and +8 kJ/mol. Hence, enthalpy barriers can be calculated with near chemical accuracy, which constitutes significant progress in the quantum chemical modeling of reactions in heterogeneous catalysis in general and microporous zeolites in particular.
Constrained Analysis of Fluorescence Anisotropy Decay: Application to Experimental Protein Dynamics
Feinstein, Efraim; Deikus, Gintaras; Rusinova, Elena; Rachofsky, Edward L.; Ross, J. B. Alexander; Laws, William R.
2003-01-01
Hydrodynamic properties as well as structural dynamics of proteins can be investigated by the well-established experimental method of fluorescence anisotropy decay. Successful use of this method depends on determination of the correct kinetic model, the extent of cross-correlation between parameters in the fitting function, and differences between the timescales of the depolarizing motions and the fluorophore's fluorescence lifetime. We have tested the utility of an independently measured steady-state anisotropy value as a constraint during data analysis to reduce parameter cross correlation and to increase the timescales over which anisotropy decay parameters can be recovered accurately for two calcium-binding proteins. Mutant rat F102W parvalbumin was used as a model system because its single tryptophan residue exhibits monoexponential fluorescence intensity and anisotropy decay kinetics. Cod parvalbumin, a protein with a single tryptophan residue that exhibits multiexponential fluorescence decay kinetics, was also examined as a more complex model. Anisotropy decays were measured for both proteins as a function of solution viscosity to vary hydrodynamic parameters. The use of the steady-state anisotropy as a constraint significantly improved the precision and accuracy of recovered parameters for both proteins, particularly for viscosities at which the protein's rotational correlation time was much longer than the fluorescence lifetime. Thus, basic hydrodynamic properties of larger biomolecules can now be determined with more precision and accuracy by fluorescence anisotropy decay. PMID:12524313
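For the simplest single-exponential case, the steady-state anisotropy constraint can be written through the Perrin relation r_ss = r0/(1 + τ/φ), which lets r0 be eliminated so that only the rotational correlation time φ is fitted. The sketch below illustrates that idea on synthetic data; it is a simplified single-exponential example, not the paper's full multi-parameter analysis, and the lifetime and noise level are assumed values.

```python
# Simplified single-exponential sketch (not the paper's full analysis): use the
# independently measured steady-state anisotropy r_ss as a constraint via the
# Perrin relation r_ss = r0 / (1 + tau/phi), eliminating r0 so that only the
# rotational correlation time phi is fitted.
import numpy as np
from scipy.optimize import curve_fit

tau = 4.0                                    # fluorescence lifetime, ns (assumed known)
t = np.linspace(0.0, 20.0, 200)              # ns
r0_true, phi_true = 0.36, 6.0
rng = np.random.default_rng(2)
r_obs = r0_true * np.exp(-t / phi_true) + 0.005 * rng.normal(size=t.size)
r_ss = r0_true / (1.0 + tau / phi_true)      # "measured" steady-state anisotropy

def constrained_decay(t, phi):
    r0 = r_ss * (1.0 + tau / phi)            # r0 fixed by the constraint
    return r0 * np.exp(-t / phi)

phi_fit, _ = curve_fit(constrained_decay, t, r_obs, p0=[3.0])
print(f"phi = {phi_fit[0]:.2f} ns, implied r0 = {r_ss * (1 + tau / phi_fit[0]):.3f}")
```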
Experimental evaluation of radiosity for room sound-field prediction.
Hodgson, Murray; Nosal, Eva-Marie
2006-08-01
An acoustical radiosity model was evaluated for how it performs in predicting real room sound fields. This was done by comparing radiosity predictions with experimental results for three existing rooms--a squash court, a classroom, and an office. Radiosity predictions were also compared with those by ray tracing--a "reference" prediction model--for both specular and diffuse surface reflection. Comparisons were made for detailed and discretized echograms, sound-decay curves, sound-propagation curves, and the variations with frequency of four room-acoustical parameters--EDT, RT, D50, and C80. In general, radiosity and diffuse ray tracing gave very similar predictions. Predictions by specular ray tracing were often very different. Radiosity agreed well with experiment in some cases, less well in others. Definitive conclusions regarding the accuracy with which the rooms were modeled, or the accuracy of the radiosity approach, were difficult to draw. The results suggest that radiosity predicts room sound fields with some accuracy, at least as well as diffuse ray tracing and, in general, better than specular ray tracing. The predictions of detailed echograms are less accurate, those of derived room-acoustical parameters more accurate. The results underline the need to develop experimental methods for accurately characterizing the absorptive and reflective characteristics of room surfaces, possible including phase.
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed, amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s⁻¹, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr → ATP) rate constant (kf) of the creatine kinase reaction was measured.
Razmara, Jafar; Zaboli, Mohammad Hassan; Hassankhani, Hadi
2016-11-01
Falls play a critical role in older people's lives, as they are an important source of morbidity and mortality in elders. In this article, elders' fall risk is predicted based on a physiological profile approach using a multilayer neural network with the back-propagation learning algorithm. The personal physiological profiles of 200 elders were collected through a questionnaire and used as the experimental data for training and testing the neural network. The profile contains a series of simple factors putting elders at risk for falls, such as vision abilities, muscle forces, and some other daily activities, grouped into two sets: psychological factors and public factors. The experimental data were investigated to select factors with high impact using principal component analysis. The experimental results show an accuracy of ≈90 percent and ≈87.5 percent for fall prediction using the psychological and public factors, respectively. Furthermore, combining these two datasets yields an accuracy of ≈91 percent, which is better than the accuracy of either single dataset. The proposed method suggests a set of valid and reliable measurements that can be employed in a range of health care systems and physical therapy settings to distinguish people who are at risk for falls.
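The processing chain, factor selection by principal component analysis followed by a back-propagation multilayer network, can be sketched generically. The synthetic profile data, the number of retained components, and the network size below are assumptions; this is not the study's questionnaire or its exact architecture.

```python
# Generic sketch (synthetic profile data, not the study's questionnaire): PCA to
# retain high-impact components, then a back-propagation multilayer network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n, n_factors = 200, 20                          # 200 elders, 20 profile factors
X = rng.normal(size=(n, n_factors))
risk = (X[:, 0] - 0.8 * X[:, 3] + 0.5 * rng.normal(size=n)) > 0   # toy fall-risk label

model = make_pipeline(StandardScaler(),
                      PCA(n_components=8),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
print(f"CV accuracy: {cross_val_score(model, X, risk, cv=5).mean():.2f}")
```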
Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement.
Ortmaier, T; Weiss, H; Döbele, S; Schreiber, U
2006-12-01
This article presents experimental results for robot-assisted navigated drilling and milling for pedicle screw placement. The preliminary study was carried out in order to gain first insights into positioning accuracies and machining forces during hands-on robotic spine surgery. Additionally, the results formed the basis for the development of a new robot for surgery. A simplified anatomical model is used to derive the accuracy requirements. The experimental set-up consists of a navigation system and an impedance-controlled light-weight robot holding the surgical instrument. The navigation system is used to position the surgical instrument and to compensate for pose errors during machining. Holes are drilled in artificial bone and bovine spine. A quantitative comparison of the drill-hole diameters was achieved using a computer. The interaction forces and pose errors are discussed with respect to the chosen machining technology and control parameters. Within the technological boundaries of the experimental set-up, it is shown that the accuracy requirements can be met and that milling is superior to drilling. It is expected that robot-assisted navigated surgery helps to improve the reliability of surgical procedures. Further experiments are necessary to take the whole workflow into account. Copyright 2006 John Wiley & Sons, Ltd.
Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan
2017-12-27
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, for example, by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for the experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail, and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
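One step of such systematic testing, running the identical cross-validated decoding analysis on simulated null data to check the empirical chance level, can be sketched as follows; the classifier, data dimensions, and number of repetitions are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

def decode(X, y):
    """The same cross-validated decoding analysis used for the real data."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()

# Null check: pure noise with random labels; accuracies should scatter around 0.5.
null_acc = [
    decode(rng.normal(size=(40, 100)), rng.permutation([0, 1] * 20))
    for _ in range(50)
]
print(f"mean accuracy on simulated null data: {np.mean(null_acc):.3f}")
# A systematic deviation from chance (e.g. reliably below 0.5) would flag a
# pipeline problem such as a counterbalancing/cross-validation mismatch.
```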
ERIC Educational Resources Information Center
Rahimi, Muhammad; Zhang, Lawrence Jun
2016-01-01
This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…
Analysis of the relationship between errors in manufacture of slot connections and gear drive noises
NASA Technical Reports Server (NTRS)
Bodronosov, M. K.
1973-01-01
On the basis of experimental research, an analysis was carried out of the effect of certain errors in the manufacture of straight-barrel slots on the noise characteristics of gear drives. In carrying out the experiments, the gear crowns of the test wheels were held immovable, and only the geometric dimensions of the slots and the mutual locations of the individual elements were varied. The investigation of the effect of each factor was carried out under otherwise equal conditions, on 34:56 cog ratio gear pairs (m = 2 mm), made of 40 C steel, with a gear crown accuracy of 7 X, machining fineness 7, at a speed v = 7.1 m/sec. The number of slots was 6. The clearance in slot pairs in dimension D, equal to 0.015, 0.05, 0.08 and 0.110 mm, was obtained by changing the outer diameter of the spindle by polishing. The results of the tests of the experimental wheels showed that their noise level increases with increasing clearance.
The HADDOCK2.2 Web Server: User-Friendly Integrative Modeling of Biomolecular Complexes.
van Zundert, G C P; Rodrigues, J P G L M; Trellet, M; Schmitz, C; Kastritis, P L; Karaca, E; Melquiond, A S J; van Dijk, M; de Vries, S J; Bonvin, A M J J
2016-02-22
The prediction of the quaternary structure of biomolecular macromolecules is of paramount importance for a fundamental understanding of cellular processes and for drug design. In the era of integrative structural biology, one way of increasing the accuracy of modeling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. This has been at the core of our information-driven docking approach HADDOCK. We present here the updated version 2.2 of the HADDOCK portal, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface. With well over 6000 registered users and 108,000 jobs served, an increasing fraction of them on grid resources, we hope that this timely upgrade will help the community to solve important biological questions and further advance the field. The HADDOCK2.2 Web server is freely accessible to non-profit users at http://haddock.science.uu.nl/services/HADDOCK2.2. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Post processing for offline Chinese handwritten character string recognition
NASA Astrophysics Data System (ADS)
Wang, YanWei; Ding, XiaoQing; Liu, ChangSong
2012-01-01
Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes, and different geometric characteristics, Chinese handwritten character string recognition is a challenging problem to deal with. Among current methods, however, the over-segmentation and merging method, which integrates geometric information, character recognition information, and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors and mainly occur around non-Chinese characters. In a Chinese character string, there are not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation errors are mainly caused by a uniform geometric model imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively. Under the multi-geometric models, narrow characters are less prone to erroneous merging. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is investigated on two datasets totalling 1405 handwritten address strings. Wide character recognition accuracy improved slightly, while narrow character recognition accuracy increased by 10.41% and 10.03% on the two datasets, respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.
NASA Astrophysics Data System (ADS)
Avanesov, G. A.; Bessonov, R. V.; Kurkina, A. N.; Nikitin, A. V.; Sazonov, V. V.
2018-01-01
The BOKZ-M60 star sensor (Unit for Measuring Star Coordinates) is intended for determining the parameters of the orientation of the axes of its intrinsic coordinate system relative to the axes of the inertial system by observations of regions of the stellar sky. It is convenient to characterize the error of a single determination of the orientation of the intrinsic coordinate system of the sensor by the vector of an infinitesimal turn of this system relative to its found position. Full-scale ground-based tests have shown that, for a resting sensor, the root-mean-square values of the components of this vector along the axes of the intrinsic coordinate system lying in the plane of the sensor CCD matrix are less than 2″, and the component along the axis perpendicular to the matrix plane is characterized by a root-mean-square value of 15″. Joint processing of simultaneous readings of several sensors installed on the same platform allows these accuracy characteristics to be improved. In this paper, estimates of the accuracy of systems of two and four BOKZ-M60 sensors, obtained from measurements carried out during the normal operation of these sensors on the Resurs-P satellite, are given. Processing the measurements of the sensor system allowed us to increase the accuracy of determining each of their orientations and to study random and systematic errors in these measurements.
Lobchuk, Michelle; Halas, Gayle; West, Christina; Harder, Nicole; Tursunova, Zulfiya; Ramraj, Chantal
2016-11-01
Stressed family carers engage in health-risk behaviours that can lead to chronic illness. Innovative strategies are required to bolster empathic dialogue skills that impact nursing student confidence and sensitivity in meeting carers' wellness needs. To report on the development and evaluation of a promising empathy-related video-feedback intervention and its impact on student empathic accuracy on carer health risk behaviours. A pilot quasi-experimental design study with eight pairs of 3rd year undergraduate nursing students and carers. Students participated in perspective-taking instructional and practice sessions, and a 10-minute video-recorded dialogue with carers followed by a video-tagging task. Quantitative and qualitative approaches helped us to evaluate the recruitment protocol, capture participant responses to the intervention and study tools, and develop a tool to assess student empathic accuracy. The instructional and practice sessions increased student self-awareness of biases and interest in learning empathy by video-tagging feedback. Carers felt that students were 'non-judgmental', inquisitive, and helped them to 'gain new insights' that fostered ownership to change their health-risk behaviour. There was substantial Fleiss Kappa agreement among four raters across five dyads and 67 tagged instances. In general, students and carers evaluated the intervention favourably. The results suggest areas of improvement to the recruitment protocol, perspective-taking instructions, video-tagging task, and empathic accuracy tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
Localization accuracy of sphere fiducials in computed tomography images
NASA Astrophysics Data System (ADS)
Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias
2014-03-01
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
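One simple localization strategy of the kind compared in such studies (cubic up-sampling followed by an intensity-weighted centroid) can be sketched as below; this is not the cross-correlation template matching that performed best in the study, and the voxel size, threshold, and synthetic blob are assumptions.

```python
import numpy as np
from scipy import ndimage

def localize_sphere(volume, voxel_size_mm, upsample=4, threshold=0.5):
    """Estimate a sphere fiducial centre (in mm) from a CT sub-volume by
    cubic up-sampling followed by an intensity-weighted centroid."""
    fine = ndimage.zoom(volume.astype(float), upsample, order=3)   # cubic interpolation
    fine -= fine.min()
    weights = np.where(fine > threshold * fine.max(), fine, 0.0)   # suppress background
    centroid = np.array(ndimage.center_of_mass(weights))           # fine-grid voxel units
    # ndimage.zoom (default grid) aligns the first and last samples of both grids:
    scale = (np.array(fine.shape) - 1) / (np.array(volume.shape) - 1)
    return centroid / scale * np.asarray(voxel_size_mm)

# Illustrative use on a synthetic blob; the 0.3 mm isotropic voxel size is assumed.
z, y, x = np.mgrid[0:21, 0:21, 0:21]
blob = np.exp(-((z - 10.2) ** 2 + (y - 9.7) ** 2 + (x - 10.5) ** 2) / 6.0)
print(localize_sphere(blob, voxel_size_mm=(0.3, 0.3, 0.3)))  # ~ (3.06, 2.91, 3.15) mm
```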
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagase, F.; Ishikawa, J.; Kurata, M.
2013-07-01
Estimation of the accident progress and status inside the reactor pressure vessels (RPV) and primary containment vessels (PCV) is required for the appropriate conduct of decommissioning at the Fukushima-Daiichi NPP. For that, it is necessary to obtain additional experimental data and revised models so that the estimation can be made using computer codes with increased accuracy. The Japan Atomic Energy Agency (JAEA) has selected phenomena to be reviewed and developed, considering previously obtained information, conditions specific to the Fukushima-Daiichi NPP accident, and recent progress in experimental and analytical technologies. As a result, research and development items have been identified in terms of thermal-hydraulic behavior in the RPV and PCV, progression of fuel bundle degradation, failure of the lower head of the RPV, and analysis of the accident. This paper introduces the selected phenomena to be reviewed and developed, research plans, and recent results from the JAEA's corresponding research programs. (authors)
NASA Astrophysics Data System (ADS)
Yang, Da-Wei; Zhao, Xiu-Ying; Zhang, Geng; Li, Qiang-Guo; Wu, Si-Zhu
2016-05-01
Molecular dynamics (MD) simulation, a molecular-level method, was applied to predict the damping properties of AO-60/polyacrylate rubber (AO-60/ACM) composites before experimental measurements were performed. MD simulation results revealed that two types of hydrogen bond were formed, namely type A, (AO-60) -OH•••O=C- (ACM), and type B, (AO-60) -OH•••O=C- (AO-60). The AO-60/ACM composites were then fabricated and tested by dynamic mechanical thermal analysis (DMTA) to verify the accuracy of the MD simulation. DMTA results showed that the introduction of AO-60 could remarkably improve the damping properties of the composites, including an increase in the glass transition temperature (Tg) along with the loss factor (tan δ), and indicated that the AO-60/ACM (98/100) composite had the best damping performance among the composites, which was verified experimentally.
A database to enable discovery and design of piezoelectric materials
de Jong, Maarten; Chen, Wei; Geerlings, Henry; Asta, Mark; Persson, Kristin Aslaug
2015-01-01
Piezoelectric materials are used in numerous applications requiring a coupling between electrical fields and mechanical strain. Despite the technological importance of this class of materials, piezoelectricity has been characterized experimentally or computationally for only a small fraction of the inorganic compounds that display compatible crystallographic symmetry. In this work we employ first-principles calculations based on density functional perturbation theory to compute the piezoelectric tensors for nearly a thousand compounds, thereby increasing the available data for this property by more than an order of magnitude. The results are compared to select experimental data to establish the accuracy of the calculated properties. The details of the calculations are also presented, along with a description of the format of the database developed to make these computational results publicly available. In addition, the ways in which the database can be accessed and applied in materials development efforts are described. PMID:26451252
Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E
2017-02-13
Transmembrane proteins play a crucial role in signaling, ion transport, and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions, and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.
Superelastic SMA U-shaped dampers with self-centering functions
NASA Astrophysics Data System (ADS)
Wang, Bin; Zhu, Songye
2018-05-01
As high-performance metallic materials, shape memory alloys (SMAs) have been investigated increasingly by the earthquake engineering community in recent years because of their remarkable self-centering (SC) and energy-dissipating capabilities. This paper systematically presents an experimental study on a novel superelastic SMA U-shaped damper (SMA-UD) with SC function under cyclic loading. The mechanical properties, including strength, SC ability, and energy-dissipating capability with varying loading amplitudes and strain rates, are evaluated. Test results show that the damper exhibits excellent and stable flag-shaped hysteresis loops over multiple loading cycles. Strain rate has a negligible effect on the cyclic behavior of the SMA-UD within the dynamic frequency range of typical interest in earthquake engineering. Furthermore, a numerical investigation is performed to understand the mechanical behavior of the SMA-UD. The numerical model is calibrated against the experimental results with reasonable accuracy. The stress–strain states associated with the different phase transformations are then also discussed.
Neural-network quantum state tomography
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe
2018-05-01
The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities, such as the entanglement entropy, from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].
A Numerical/Experimental Study on the Impact and CAI Behaviour of Glass Reinforced Composite Plates
NASA Astrophysics Data System (ADS)
Perillo, Giovanni; Jørgensen, Jens K.; Cristiano, Roberta; Riccio, Aniello
2018-04-01
This paper focuses on the development of an advanced numerical model specifically for simulating low-velocity impact events and the related stiffness reduction in composite structures. The model is suitable for low-cost, thick composite structures such as wind turbine blades and maritime vessels. It consists of a combination of inter-laminar and intra-laminar models. The intra-laminar model combines the Puck and Hashin failure theories for the evaluation of fibre and matrix failure. The inter-laminar damage is instead simulated by a cohesive zone method based on an energy approach. Basic material properties, easily measurable according to standardized tests, are required. The model has been used to simulate impact and compression-after-impact tests. Experimental tests were carried out on thick E-Glass/Epoxy composites commonly used in the wind turbine industry. The clustering effect as well as the influence of the impact energy were tested experimentally. The accuracy of the numerical model has been verified against experimental data, showing very good agreement.
An accuracy measurement method for star trackers based on direct astronomic observation
Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping
2016-01-01
The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412
α-decay half-lives and Qα values of superheavy nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Jianmin; Graduate University of Chinese Academy of Sciences, Beijing 100049; School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000
2010-06-15
The α-decay half-lives of recently synthesized superheavy nuclei (SHN) are investigated by employing a unified fission model (UFM) in which a new method to calculate the assault frequency of α emission is used. The excellent agreement with the experimental data indicates that the UFM is a useful tool to investigate these α decays. It is found that the α-decay half-lives become more and more insensitive to the Qα values as the atomic number increases on the whole, which is favorable for predicting the half-lives of SHN. In addition, a formula is proposed to compute the Qα values for nuclei with Z ≥ 92 and N ≥ 140 with good accuracy, according to which the long-lived SHN should be neutron rich. Several weeks ago, two isotopes of a new element with atomic number Z = 117 were synthesized and their α-decay chains were observed. The Qα formula is found to work well for these nuclei, confirming its predictive power. The experimental half-lives are well reproduced by employing the UFM with the experimental Qα values. The fact that the experimental half-lives are compatible with the experimental Qα values supports the synthesis of the new element 117 and the experimental measurements to a certain extent.
A model-based scatter artifacts correction for cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Wei; Zhu, Jun; Wang, Luyao
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 HU to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.
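The overall flow of such a correction (coarse scatter estimate, Poisson-aware smoothing, subtraction in the projection domain) can be illustrated generically as follows; the Anscombe-transform smoothing is a stand-in for the paper's dedicated Poisson denoising algorithm, and the synthetic projection data are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(raw_proj, coarse_scatter, sigma=8.0):
    """Smooth a coarse scatter estimate through the variance-stabilising
    Anscombe transform (suited to Poisson-like counts) and subtract it."""
    anscombe = 2.0 * np.sqrt(np.maximum(coarse_scatter, 0.0) + 3.0 / 8.0)
    smoothed = gaussian_filter(anscombe, sigma)        # scatter is low-frequency
    scatter = (smoothed / 2.0) ** 2 - 3.0 / 8.0        # approximate inverse transform
    return np.clip(raw_proj - scatter, 1e-6, None)     # keep primaries positive

# Toy usage standing in for one CBCT projection (counts):
rng = np.random.default_rng(0)
primary = 1000.0 * np.ones((256, 256))
rows = np.indices((256, 256))[0]
scatter_true = 200.0 * np.exp(-(((rows - 128) / 120.0) ** 2))
raw = rng.poisson(primary + scatter_true).astype(float)
corrected = correct_projection(raw, raw - primary)     # coarse estimate = raw - primary
print(abs(corrected.mean() - primary.mean()))          # small residual vs. the scatter level
```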
Cawston-Grant, Brie; Morrison, Hali; Menon, Geetha; Sloboda, Ron S
2017-05-01
Model-based dose calculation algorithms have recently been incorporated into brachytherapy treatment planning systems, and their introduction requires critical evaluation before clinical implementation. Here, we present an experimental evaluation of Oncentra ® Brachy Advanced Collapsed-cone Engine (ACE) for a multichannel vaginal cylinder (MCVC) applicator using radiochromic film. A uniform dose of 500 cGy was specified to the surface of the MCVC using the TG-43 dose formalism under two conditions: (a) with only the central channel loaded or (b) only the peripheral channels loaded. Film measurements were made at the applicator surface and compared to the doses calculated using TG-43, standard accuracy ACE (sACE), and high accuracy ACE (hACE). When the central channel of the applicator was used, the film measurements showed a dose increase of (11 ± 8)% (k = 2) above the two outer grooves on the applicator surface. This increase in dose was confirmed with the hACE calculations, but was not confirmed with the sACE calculations at the applicator surface. When the peripheral channels were used, a periodic azimuthal variation in measured dose was observed around the applicator. The sACE and hACE calculations confirmed this variation and agreed within 1% of each other at the applicator surface. Additionally for the film measurements with the central channel used, a baseline dose variation of (10 ± 4)% (k = 2) of the mean dose was observed azimuthally around the applicator surface, which can be explained by offset source positioning in the central channel. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Lyons, R A; Rodgers, S E; Thomas, S; Bailey, R; Brunt, H; Thayer, D; Bidmead, J; Evans, B A; Harold, P; Hooper, M; Snooks, H
2016-05-23
There is no evidence to date on whether an intervention alerting people to high levels of pollution is effective in reducing health service utilisation. We evaluated alert accuracy and the effect of a targeted personal air pollution alert system, airAware, on emergency hospital admissions, emergency department attendances, general practitioner contacts and prescribed medications. Quasi-experimental study describing the accuracy of alerts compared with pollution triggers, and comparing relative changes in healthcare utilisation in the intervention group with those in people who did not sign up. Participants were people diagnosed with asthma, chronic obstructive pulmonary disease (COPD) or coronary heart disease, resident in an industrial area of south Wales and registered as patients at 1 of 4 general practices. Longitudinal anonymised record-linked data were modelled for participants and non-participants, adjusting for differences between groups. During the 2-year intervention period alerts were correctly issued on 208 of 248 occasions; sensitivity was 83.9% (95% CI 78.8% to 87.9%) and specificity 99.5% (95% CI 99.3% to 99.6%). The intervention was associated with a 4-fold increase in admissions for respiratory conditions (incidence rate ratio (IRR) 3.97; 95% CI 1.59 to 9.93) and a near doubling of emergency department attendance (IRR=1.89; 95% CI 1.34 to 2.68). The intervention was associated with increased emergency admissions for respiratory conditions. While findings may be context specific, evidence from this evaluation questions the benefits of implementing near real-time personal pollution alert systems for high-risk individuals. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Speed/accuracy trade-off between the habitual and the goal-directed processes.
Keramati, Mehdi; Dezfouli, Amir; Piray, Payam
2011-05-01
Instrumental responses are hypothesized to be of two kinds: habitual and goal-directed, mediated by the sensorimotor and the associative cortico-basal ganglia circuits, respectively. The existence of the two heterogeneous associative learning mechanisms can be hypothesized to arise from the comparative advantages that they have at different stages of learning. In this paper, we assume that the goal-directed system is behaviourally flexible, but slow in choice selection. The habitual system, in contrast, is fast in responding, but inflexible in adapting its behavioural strategy to new conditions. Based on these assumptions and using the computational theory of reinforcement learning, we propose a normative model for arbitration between the two processes that strikes an approximately optimal balance between search time and accuracy in decision making. Behaviourally, the model can explain experimental evidence on behavioural sensitivity to outcome at the early stages of learning, but insensitivity at the later stages. It also explains that when two choices with equal incentive values are available concurrently, the behaviour remains outcome-sensitive, even after extensive training. Moreover, the model can explain choice reaction time variations during the course of learning, as well as the experimental observation that as the number of choices increases, the reaction time also increases. Neurobiologically, by assuming that the phasic and tonic activities of midbrain dopamine neurons carry the reward prediction error and the average reward signals used by the model, respectively, the model predicts that whereas phasic dopamine indirectly affects behaviour through reinforcing stimulus-response associations, tonic dopamine can directly affect behaviour by manipulating the competition between the habitual and the goal-directed systems and thus affect reaction time.
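The arbitration principle, deliberating with the slow goal-directed system only when the expected improvement in the upcoming choice outweighs the reward foregone during deliberation, can be written down in a few lines; the numerical quantities below are illustrative placeholders rather than the paper's fitted model.

```python
def use_goal_directed(value_of_deliberation, avg_reward_rate, deliberation_time):
    """Deliberate (goal-directed control) only if the expected improvement in the
    upcoming choice exceeds the reward foregone while deliberating."""
    return value_of_deliberation > avg_reward_rate * deliberation_time

# Early in learning the habitual values are uncertain, so deliberation pays off;
# after extensive training the expected gain shrinks and the habit takes over.
print(use_goal_directed(0.30, avg_reward_rate=0.5, deliberation_time=0.4))   # True
print(use_goal_directed(0.02, avg_reward_rate=0.5, deliberation_time=0.4))   # False
```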
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that can be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128-transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error in the localization of a bubble, considered a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in the radiofrequency data, then translating this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of super-localization will allow the optimization of the imaging setup for each organ. Ultimately, accuracies better than the size of capillaries are achievable at depths of several centimeters.
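The super-localization step itself, estimating a point source's position to a small fraction of the wavelength, is commonly done by interpolating around the intensity peak; the parabolic sub-sample fit below is a generic illustration and not the timing-accuracy model derived in the paper.

```python
import numpy as np

def subsample_peak(profile):
    """Sub-sample peak position of a 1-D intensity profile via a parabola
    fitted through the maximum sample and its two neighbours."""
    i = int(np.argmax(profile))
    if i == 0 or i == profile.size - 1:
        return float(i)
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# A point-source response centred between samples is recovered with sub-sample accuracy:
x = np.arange(32)
psf = np.exp(-((x - 15.3) ** 2) / 8.0)
print(subsample_peak(psf))     # close to 15.3
```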
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
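A minimal sketch of the recalibration idea, fitting correction coefficients from matched measured/putative mass pairs and applying them to all detected peaks, is given below; the linear-in-mass error model and the example masses are assumptions for illustration, whereas the disclosed method adjusts additional experimental parameters.

```python
import numpy as np

def fit_recalibration(measured, putative):
    """Least-squares coefficients (a, b) of the error model
    error = a * m + b, so that m_corrected = m - (a * m + b)."""
    A = np.column_stack([measured, np.ones_like(measured)])
    coef, *_ = np.linalg.lstsq(A, measured - putative, rcond=None)
    return coef

def recalibrate(masses, coef):
    a, b = coef
    return masses - (a * masses + b)

# Hypothetical matched pairs (Da): measured peaks carry a small systematic drift.
putative = np.array([500.2452, 812.3871, 1245.6204, 1877.9430])
measured = putative * (1.0 + 3e-6) + 0.0004          # ~3 ppm scale error plus an offset
coef = fit_recalibration(measured, putative)
ppm_after = (recalibrate(measured, coef) - putative) / putative * 1e6
print(np.abs(ppm_after).max())                       # sub-ppm residual error
```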
Calorimetric method for determination of ⁵¹Cr neutrino source activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veretenkin, E. P., E-mail: veretenk@inr.ru; Gavrin, V. N.; Danshin, S. N.
Experimental study of nonstandard neutrino properties using high-intensity artificial neutrino sources requires the activity of the sources to be determined with high accuracy. In the BEST project, a calorimetric system for measurement of the activity of high-intensity (a few MCi) neutrino sources based on ⁵¹Cr with an accuracy of 0.5–1% is created. In the paper, the main factors affecting the accuracy of determining the neutrino source activity are discussed. The calorimetric system design and the calibration results using a thermal simulator of the source are presented.
Competency-based assessment in surgeon-performed head and neck ultrasonography: A validity study.
Todsen, Tobias; Melchiors, Jacob; Charabi, Birgitte; Henriksen, Birthe; Ringsted, Charlotte; Konge, Lars; von Buchwald, Christian
2018-06-01
Head and neck ultrasonography (HNUS) is increasingly used as a point-of-care diagnostic tool by otolaryngologists. However, ultrasonography (US) is a very operator-dependent imaging modality. Hence, this study aimed to explore the diagnostic accuracy of surgeon-performed HNUS and to establish validity evidence for an objective structured assessment of ultrasound skills (OSAUS) used for competency-based assessment. A prospective experimental study. Six otolaryngologists and 11 US novices were included in a standardized test setup in which they had to perform focused HNUS of eight patients suspected of having different head and neck lesions. Their diagnostic accuracy was calculated based on the US reports, and two blinded raters assessed the video-recorded US performance using the OSAUS scale. The otolaryngologists obtained a high diagnostic accuracy of 88% (range 63%-100%) compared to 38% (range 0-63%) for the US novices; P < 0.001. The OSAUS score demonstrated good inter-case reliability (0.85) and inter-rater reliability (0.76), and significant discrimination between otolaryngologists and US novices; P < 0.001. A strong correlation between the OSAUS score and diagnostic accuracy was found (Spearman's ρ = 0.85; P < 0.001), and a pass/fail score was established at 2.8. Strong validity evidence supported the use of the OSAUS scale to assess HNUS competence, with good reliability, significant discrimination between US competence levels, and a strong correlation of the assessment score with diagnostic accuracy. An OSAUS pass/fail score was established and could be used for competency-based assessment in surgeon-performed HNUS. NA. Laryngoscope, 128:1346-1352, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Zheng, Yu; Chen, Xiong; Zhou, Mei; Wang, Meng-jun; Wang, Jin-hai; Li, Gang; Cui, Jun
2015-10-01
It is important to monitor and control the temperature of the cell physiological solution in real time in patch clamp experiments, which can eliminate the uncertainty due to temperature and improve the measurement accuracy. This paper studies the influence of different ions at different concentrations in the physiological solution on the precision of a temperature model built using near-infrared spectroscopy and chemometric methods. First, we prepared twelve sample solutions with CaCl2, KCl and NaCl as solutes at four concentrations each, and collected the spectra of the solutions over the temperature range 20-40 °C; the spectral range was 9615-5714 cm-1. We then divided the spectra of each solution at different temperatures into two parts (a training set and a prediction set) by three methods. The interval partial least squares method was used to select an effective wavelength range and to develop calibration models between the spectra in the selected range and the temperature values. The experimental results show that the RMSEP of the 0.25 g/mL CaCl2 solution is the largest, with values of 0.3863, 0.3037 and 0.3372 °C in the three tests, while the RMSEP of the 0.005 g/mL NaCl solution is the smallest, with values of 0.2208, 0.1553 and 0.1452 °C. The results indicate that Ca2+ has the greatest influence on the accuracy of the temperature model of the cell physiological solution, followed by K+, with Na+ having the least influence, and that the model accuracy decreases as the ionic concentration increases. Therefore, when building the temperature model of the cell physiological solution, it is necessary to account for the proportions of the three main ions in the solution in order to correct for the effects of different ionic concentrations and to improve the accuracy of temperature measurement by near-infrared spectroscopy.
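The interval-PLS wavelength selection and PLS calibration described can be sketched as follows with scikit-learn; the synthetic spectra, temperature range, interval count, and component number are placeholders rather than the experimental settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def ipls_select(spectra, temps, n_intervals=10, n_components=3):
    """Split the wavenumber axis into equal intervals, build a PLS model on
    each, and return (best interval slice, its cross-validated RMSEP)."""
    edges = np.linspace(0, spectra.shape[1], n_intervals + 1, dtype=int)
    best = None
    for lo, hi in zip(edges[:-1], edges[1:]):
        pls = PLSRegression(n_components=min(n_components, hi - lo))
        pred = cross_val_predict(pls, spectra[:, lo:hi], temps, cv=5).ravel()
        rmsep = np.sqrt(np.mean((pred - temps) ** 2))
        if best is None or rmsep < best[1]:
            best = (slice(lo, hi), rmsep)
    return best

# Synthetic example: 60 spectra x 200 wavenumbers with one temperature-sensitive band.
rng = np.random.default_rng(0)
temps = rng.uniform(20, 40, 60)
spectra = rng.normal(scale=0.01, size=(60, 200))
spectra[:, 120:140] += 0.02 * temps[:, None]        # temperature-sensitive region
sel, rmsep = ipls_select(spectra, temps)
print(sel, round(rmsep, 3))
```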
Accuracy of force and center of pressure measures of the Wii Balance Board.
Bartlett, Harrison L; Ting, Lena H; Bingham, Jeffrey T
2014-01-01
The Nintendo Wii Balance Board (WBB) is increasingly used as an inexpensive force plate for assessment of postural control; however, no documentation of force and COP accuracy and reliability is publicly available. Therefore, we performed a standard measurement uncertainty analysis on 3 lightly and 6 heavily used WBBs to provide future users with information about the repeatability and accuracy of the WBB force and COP measurements. Across WBBs, we found the total uncertainty of force measurements to be within ±9.1 N, and of COP location within ±4.1 mm. However, repeatability of a single measurement within a board was better (4.5 N, 1.5 mm), suggesting that the WBB is best used for relative measures using the same device, rather than absolute measurement across devices. Internally stored calibration values were comparable to those determined experimentally. Further, heavy wear did not significantly degrade performance. In combination with prior evaluation of WBB performance and published standards for measuring human balance, our study provides necessary information to evaluate the use of the WBB for analysis of human balance control. We suggest the WBB may be useful for low-resolution measurements, but should not be considered as a replacement for laboratory-grade force plates. Published by Elsevier B.V.
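For context, the centre of pressure on a four-corner board such as the WBB is conventionally computed from the corner load-cell forces; the sketch below uses the standard corner-sensor formula with assumed sensor spacings, not dimensions or calibration values taken from the study.

```python
def wbb_cop(tl, tr, bl, br, width=0.43, length=0.24):
    """Centre of pressure (m) and total force (N) from four corner load cells.

    tl, tr, bl, br : top-left, top-right, bottom-left, bottom-right forces in N
    width, length  : assumed sensor-to-sensor distances along x and y (m)
    """
    total = tl + tr + bl + br
    cop_x = (width / 2.0) * ((tr + br) - (tl + bl)) / total    # + toward right sensors
    cop_y = (length / 2.0) * ((tl + tr) - (bl + br)) / total   # + toward top sensors
    return cop_x, cop_y, total

print(wbb_cop(tl=150.0, tr=160.0, bl=140.0, br=155.0))  # small rightward/upward offset
```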
Volumetric calibration of a plenoptic camera
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...
2018-02-01
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
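The polynomial mapping at the heart of both calibration methods can be illustrated by a least-squares fit from distorted (thin-lens) reconstruction coordinates to known dot-card coordinates; the cubic degree and the synthetic radial distortion below are assumptions for the sketch.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_volumetric_mapping(reconstructed_xyz, true_xyz, degree=3):
    """Least-squares 3-D polynomial mapping from distorted reconstruction
    coordinates to calibrated object-space coordinates."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(reconstructed_xyz, true_xyz)
    return model

# Synthetic dot-card points at several depths, warped by a mild radial distortion:
rng = np.random.default_rng(0)
true_xyz = rng.uniform(-20.0, 20.0, size=(500, 3))
r2 = np.sum(true_xyz[:, :2] ** 2, axis=1, keepdims=True)
reconstructed = true_xyz + 1e-4 * r2 * true_xyz        # toy thin-lens/distortion error
mapping = fit_volumetric_mapping(reconstructed, true_xyz)
residual = mapping.predict(reconstructed) - true_xyz
print(np.abs(residual).max())                          # near zero after the mapping
```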
Predicting the Accuracy of Protein–Ligand Docking on Homology Models
BORDOGNA, ANNALISA; PANDINI, ALESSANDRO; BONATI, LAURA
2011-01-01
Ligand–protein docking is increasingly used in Drug Discovery. The initial limitations imposed by a reduced availability of target protein structures have been overcome by the use of theoretical models, especially those derived by homology modeling techniques. While this greatly extended the use of docking simulations, it also introduced the need for general and robust criteria to estimate the reliability of docking results given the model quality. To this end, a large-scale experiment was performed on a diverse set including experimental structures and homology models for a group of representative ligand–protein complexes. A wide spectrum of model quality was sampled using templates at different evolutionary distances and different strategies for target–template alignment and modeling. The obtained models were scored by a selection of the most used model quality indices. The binding geometries were generated using AutoDock, one of the most common docking programs. An important result of this study is that indeed quantitative and robust correlations exist between the accuracy of docking results and the model quality, especially in the binding site. Moreover, state-of-the-art indices for model quality assessment are already an effective tool for an a priori prediction of the accuracy of docking experiments in the context of groups of proteins with conserved structural characteristics. PMID:20607693
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating the chlorophyll content of a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by linear regression, where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was carried out for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the reference chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to estimate the chlorophyll content of the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increases the accuracy of chlorophyll content estimation by using an optical arrangement that yields both reflectance and transmittance information, while requiring only inexpensive hardware.
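The estimation step itself, a linear regression from leaf reflectance and transmittance to chlorophyll content, can be sketched as below; the calibration values are hypothetical, and in practice a spectrophotometer- or SPAD-referenced dataset would be used to fit the model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration set: reflectance R and transmittance T of leaves
# with reference chlorophyll contents (e.g. from a spectrophotometer).
R = np.array([0.10, 0.12, 0.08, 0.15, 0.09, 0.11])
T = np.array([0.06, 0.09, 0.05, 0.12, 0.05, 0.08])
chl = np.array([42.0, 33.0, 48.0, 25.0, 46.0, 38.0])   # illustrative units, e.g. µg/cm²

X = np.column_stack([R, T])
model = LinearRegression().fit(X, chl)

# Estimate the chlorophyll content of a new leaf from its measured R and T:
print(model.predict([[0.11, 0.07]]))
```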
A class of nonideal solutions. 2: Application to experimental data
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.; Donovan, L. F.
1983-01-01
Functions for the representation of the thermodynamic properties of nonideal solutions were applied to the experimental data for several highly nonideal solutions. The test solutions were selected to cover both electrolyte and nonelectrolyte behavior. The results imply that the functions are fully capable of representing the experimental data within their accuracy over the whole composition range and demonstrate that many nonideal solutions can be regarded as members of the defined class of nonideal solutions.
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Tessler, Morgan P.
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
Enthalpies of Formation of Hydrazine and Its Derivatives.
Dorofeeva, Olga V; Ryzhova, Oxana N; Suchkova, Taisiya A
2017-07-20
Enthalpies of formation, ΔfH°298, in both the gas and condensed phase, and enthalpies of sublimation or vaporization have been estimated for hydrazine, NH2NH2, and 36 of its derivatives using quantum chemical calculations. The composite G4 method has been used along with isodesmic reaction schemes to derive a set of self-consistent, high-accuracy gas-phase enthalpies of formation. To estimate the enthalpies of sublimation and vaporization with reasonable accuracy (5-20 kJ/mol), the method of molecular electrostatic potential (MEP) has been used. The value of ΔfH°298(NH2NH2, g) = 97.0 ± 3.0 kJ/mol was determined from 75 isogyric reactions involving about 50 reference species; for most of these species, accurate ΔfH°298(g) values are available in the Active Thermochemical Tables (ATcT). The calculated value is in excellent agreement with the reported results of the most accurate models based on coupled cluster theory (97.3 kJ/mol, the average of six calculations). Thus, the difference between the values predicted by high-level theoretical calculations and the experimental value of ΔfH°298(NH2NH2, g) = 95.55 ± 0.19 kJ/mol recommended in the ATcT and other comprehensive reference sources is sufficiently large and requires further investigation. Different hydrazine derivatives have also been considered in this work. For some of them, both the enthalpy of formation in the condensed phase and the enthalpy of sublimation or vaporization are available; for other compounds, experimental data for only one of these properties exist. Evidence of the accuracy of the experimental data for the first group of compounds was provided by the agreement with the theoretical ΔfH°298(g) value. The unknown property for the second group of compounds was predicted using the MEP model. This paper presents a systematic comparison of experimentally determined enthalpies of formation and enthalpies of sublimation or vaporization with the results of calculations. Because of the relatively large uncertainty in the estimated enthalpies of sublimation, it was not always possible to evaluate the accuracy of the experimental values; however, this model allowed us to detect large errors in the experimental data, as in the case of 5,5'-hydrazinebistetrazole. The enthalpies of formation and enthalpies of sublimation or vaporization have been predicted for the first time for ten hydrazine derivatives with no experimental data. A recommended set of self-consistent experimental and calculated gas-phase enthalpies of formation of hydrazine derivatives can be used as reference ΔfH°298(g) values to predict the enthalpies of formation of various hydrazines by means of isodesmic reactions.
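The isodesmic-reaction bookkeeping behind such gas-phase values reduces to simple arithmetic combining a computed reaction enthalpy with accurately known enthalpies of formation of the reference species; the helper below is generic, and the numbers in the example are placeholders, not values from the paper or the ATcT.

```python
def target_enthalpy(delta_r_h, known_reactants, known_products):
    """Gas-phase enthalpy of formation (kJ/mol) of the single unknown species in
    the reaction  unknown + known reactants -> known products, using
    ΔrH = Σ ΔfH(products) - ΔfH(unknown) - Σ ΔfH(known reactants)."""
    return sum(known_products) - sum(known_reactants) - delta_r_h

# Placeholder example: unknown + A -> B + C with a computed ΔrH of -12.0 kJ/mol
print(target_enthalpy(delta_r_h=-12.0,
                      known_reactants=[-75.0],        # ΔfH°(A), hypothetical
                      known_products=[20.0, -50.0]))  # ΔfH°(B), ΔfH°(C), hypothetical
```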
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
NASA Astrophysics Data System (ADS)
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; Lane, J. Matthew D.
2018-05-01
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. These simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; ...
2018-05-04
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
Gas-phase conformations of 2-methyl-1,3-dithiolane investigated by microwave spectroscopy
NASA Astrophysics Data System (ADS)
Van, Vinh; Stahl, Wolfgang; Schwell, Martin; Nguyen, Ha Vinh Lam
2018-03-01
Quantum chemical conformational analysis of 2-methyl-1,3-dithiolane yielded only one stable conformer with envelope geometry at some levels of theory, whereas other levels indicated two envelope conformers. Analysis of the microwave spectrum, recorded using two molecular jet Fourier transform microwave spectrometers covering the frequency range from 2 to 40 GHz, confirms that only one conformer exists under jet conditions. The experimental spectrum was reproduced using a rigid-rotor model with centrifugal distortion correction within the measurement accuracy of 1.5 kHz, and molecular parameters were determined with very high accuracy. The gas-phase structure of the title molecule is compared with the structures of other related molecules studied under the same experimental conditions.
NASA Astrophysics Data System (ADS)
Ji, Hongzhu; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan
2018-06-01
An iterative method, based on a derived inverse relationship between the atmospheric backscatter coefficient and the aerosol lidar ratio, is proposed to invert the lidar ratio profile and the aerosol extinction coefficient. The feasibility of this method is investigated theoretically and experimentally. Simulation results show that the inversion accuracy of aerosol optical properties for the iterative method can be improved in the near-surface aerosol layer and in the optically thick layer. Experimentally, owing to reduced insufficiency and incoherence errors, aerosol optical properties can be obtained with higher accuracy in the near-surface region and in the region of numerical derivative distortion. In addition, the particle component can be distinguished roughly based on the improved lidar ratio profile.
Physics of self-aligned assembly at room temperature
NASA Astrophysics Data System (ADS)
Dubey, V.; Beyne, E.; Derakhshandeh, J.; De Wolf, I.
2018-01-01
Self-aligned assembly, making use of capillary forces, is considered an alternative to active alignment during thermo-compression bonding of Si chips in the 3D heterogeneous integration process. Various process parameters affect the alignment accuracy of the chip over the patterned binding site on a substrate/carrier wafer. This paper discusses the chip motion due to wetting and capillary force using a transient coupled-physics model for the two regimes (the wetting regime and the damped oscillatory regime) in the temporal domain. Using the transient model, the effect of the liquid volume and of the placement accuracy of the chip on the alignment force is studied. The capillary time (the time it takes for the chip to reach its mean position) is directly proportional to the placement offset and inversely proportional to the viscosity. The time constant of the harmonic oscillations is directly proportional to the gap between the chips, which is set by the volume of the fluid. The predicted behavior from the transient simulations is then experimentally validated, confirming that the liquid volume and the initial placement affect the final alignment accuracy of the top chip on the bottom substrate. With statistical experimental data, we demonstrate an alignment accuracy reaching <1 μm.
NASA Astrophysics Data System (ADS)
Channumsin, Sittiporn; Ceriotti, Matteo; Radice, Gianmarco; Watson, Ian
2017-09-01
Multilayer insulation (MLI) is a recently-discovered type of debris originating from delamination of aging spacecraft; it is mostly detected near the geosynchronous orbit (GEO). Observation data indicate that these objects are characterised by high reflectivity, high area-to-mass ratio (HAMR), fast rotation, high sensitivity to perturbations (especially solar radiation pressure) and change of area-to-mass ratio (AMR) over time. As a result, traditional models (e.g. cannonball) are unsuitable to represent and predict this debris' orbital evolution. Previous work by the authors effectively modelled the flexible debris by means of multibody dynamics to improve the prediction accuracy; the orbital evolution predicted with the flexible model differed significantly from that obtained with the rigid model. This paper presents a methodology to determine the dynamic properties of thin membranes with the purpose of validating the deformation characteristics of the flexible model. The experimental setup is a high-vacuum chamber (10^-4 mbar), used to greatly reduce air friction, inside which a thin membrane is hinged at one end and free at the other. A free-motion test is used to determine the damping characteristics and natural frequency of the thin membrane via logarithmic decrement and frequency response. The membrane can swing freely in the chamber and its motion is tracked by a static optical camera; a Kalman filter is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion. Then, the effect of solar radiation pressure on the thin membrane is investigated: a high-power spotlight (500-2000 W) is used to illuminate the sample and any displacement of the membrane is measured by means of a high-resolution laser sensor. Analytic methods based on the natural frequency response and Finite Element Analysis (FEA), including multibody simulations of both experimental setups, are used to validate the flexible model by comparing the experimental results for amplitude decay, natural frequencies and deformation. The experimental results show good agreement with both the analytical results and the finite element methods.
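A minimal sketch (not the authors' code) of how the damping ratio and natural frequency described above might be extracted from the tracked free-decay peaks via the logarithmic decrement; the peak times and amplitudes below are hypothetical.

```python
import numpy as np

def damping_from_free_decay(peak_times, peak_amplitudes):
    """Estimate damping ratio and natural frequency from successive
    oscillation peaks of a free-decay test (logarithmic-decrement method)."""
    peak_times = np.asarray(peak_times, dtype=float)
    x = np.asarray(peak_amplitudes, dtype=float)
    n = len(x) - 1                                      # number of full cycles spanned
    delta = np.log(x[0] / x[-1]) / n                    # logarithmic decrement
    zeta = delta / np.sqrt(4.0 * np.pi**2 + delta**2)   # damping ratio
    T_d = np.mean(np.diff(peak_times))                  # damped period from peak spacing
    omega_d = 2.0 * np.pi / T_d                         # damped angular frequency
    omega_n = omega_d / np.sqrt(1.0 - zeta**2)          # undamped natural frequency
    return zeta, omega_n / (2.0 * np.pi)                # (damping ratio, f_n in Hz)

# Hypothetical peak data from the tracked membrane tip displacement:
zeta, f_n = damping_from_free_decay([0.0, 0.8, 1.6, 2.4], [5.0, 4.1, 3.4, 2.8])
print(f"zeta = {zeta:.3f}, natural frequency = {f_n:.2f} Hz")
```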
Validation of the human odor span task: effects of nicotine.
MacQueen, David A; Drobes, David J
2017-10-01
Amongst non-smokers, nicotine generally enhances performance on tasks of attention, with limited effect on working memory. In contrast, nicotine has been shown to produce robust enhancements of working memory in non-humans. To address this gap, the present study investigated the effects of nicotine on the performance of non-smokers on a cognitive battery which included a working memory task reverse-translated from use with rodents (the odor span task, OST). Nicotine has been reported to enhance OST performance in rats, and the present study assessed whether this effect generalizes to human performance. Thirty non-smokers were tested on three occasions after consuming either placebo, 2 mg, or 4 mg nicotine gum. On each occasion, participants completed a battery of clinical and experimental tasks of working memory and attention. Nicotine was associated with dose-dependent enhancements in sustained attention, as evidenced by increased hit accuracy on the rapid visual information processing (RVIP) task. However, nicotine failed to produce main effects on OST performance or on alternative measures of working memory (digit span, spatial span, letter-number sequencing, 2-back) or attention (digits forward, 0-back). Interestingly, enhancement of RVIP performance occurred concomitant with significant reductions in self-reported attention/concentration. Human OST performance was significantly related to N-back performance, and as in rodents, OST accuracy declined with increasing memory load. Given the similarity of human and rodent OST performance under baseline conditions and the strong association between OST and visual 0-back accuracy, the OST may be particularly useful in the study of conditions characterized by inattention.
Lung imaging in rodents using dual energy micro-CT
NASA Astrophysics Data System (ADS)
Badea, C. T.; Guo, X.; Clark, D.; Johnston, S. M.; Marshall, C.; Piantadosi, C.
2012-03-01
Dual energy CT imaging is expected to play a major role in the diagnostic arena as it provides material decomposition on an elemental basis. The purpose of this work is to investigate the use of dual energy micro-CT for the estimation of vascular, tissue, and air fractions in rodent lungs using a post-reconstruction three-material decomposition method. We have tested our method using both simulations and experimental work. Using simulations, we have estimated the accuracy limits of the decomposition for realistic micro-CT noise levels. Next, we performed experiments involving ex vivo lung imaging in which intact lungs were carefully removed from the thorax, injected with an iodine-based contrast agent, and inflated with air at different volume levels. Finally, we performed in vivo imaging studies in (n=5) C57BL/6 mice using fast prospective respiratory gating in end-inspiration and end-expiration for three different levels of positive end-expiratory pressure (PEEP). Prior to imaging, mice were injected with a liposomal blood pool contrast agent. The mean accuracy values were 95.5% for air, 96% for blood, and 92.4% for tissue. The absolute accuracy in determining all material fractions was 94.6%. The minimum difference that we could detect in material fractions was 15%. As expected, an increase in PEEP levels for the living mouse resulted in statistically significant increases in air fractions at end-expiration, but no significant changes at end-inspiration. Our method has applicability in preclinical pulmonary studies where various physiological changes can occur as a result of genetic changes, lung disease, or drug effects.
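A minimal numpy sketch of a post-reconstruction three-material decomposition of the kind described: each voxel's CT numbers at the two energies, together with the constraint that the air, blood, and tissue fractions sum to one, define a 3×3 linear system per voxel. The calibration values and the example voxel are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration CT numbers (HU) of the pure materials at the two energies.
#                    air      blood(+contrast)  tissue
CT_low  = np.array([-1000.0,  900.0,            60.0])   # low-energy scan
CT_high = np.array([-1000.0,  450.0,            55.0])   # high-energy scan

# Per-voxel system:  [CT_low; CT_high; 1 1 1] @ fractions = [measured_low, measured_high, 1]
A = np.vstack([CT_low, CT_high, np.ones(3)])

def decompose(voxel_low, voxel_high):
    b = np.array([voxel_low, voxel_high, 1.0])
    f = np.linalg.solve(A, b)
    # clip small negative values from noise (this may leave the sum slightly off 1)
    return np.clip(f, 0.0, 1.0)

air, blood, tissue = decompose(voxel_low=-350.0, voxel_high=-300.0)
print(f"air {air:.2f}, blood {blood:.2f}, tissue {tissue:.2f}")
```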
Effects of laser power density and initial grain size in laser shock punching of pure copper foil
NASA Astrophysics Data System (ADS)
Zheng, Chao; Zhang, Xiu; Zhang, Yiliang; Ji, Zhong; Luan, Yiguo; Song, Libin
2018-06-01
The effects of laser power density and initial grain size on the forming quality of holes in the laser shock punching process were investigated in the present study. Three different initial grain sizes and three levels of laser power density were considered, and laser shock punching experiments on T2 copper foil were conducted. Based upon the experimental results, the shape accuracy, fracture surface morphology and microstructures of the punched holes were examined. It is revealed that the initial grain size has a noticeable effect on the forming quality of holes punched by laser shock. The shape accuracy of punched holes degrades with increasing grain size. As the laser power density is increased, the shape accuracy can be improved except for the case in which the ratio of foil thickness to initial grain size is approximately equal to 1. Compared with the fracture surface morphology under quasistatic loading conditions, the fracture surface after laser shock can be divided into three zones: rollover, shearing and burr. The distribution of these three zones is strongly related to the initial grain size. When the laser power density is increased, the shearing depth does not increase, and even diminishes in some cases. There is no obvious change in microstructure with increasing laser power density. However, when the initial grain size is close to the foil thickness, single-crystal shear deformation may occur, suggesting that the ratio of foil thickness to initial grain size has an important impact on the deformation behavior of metal foil in the laser shock punching process.
A Novel Recommendation System to Match College Events and Groups to Students
NASA Astrophysics Data System (ADS)
Qazanfari, K.; Youssef, A.; Keane, K.; Nelson, J.
2017-10-01
With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting from a user a few keywords and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users’ keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like Glove and Word2Vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website.
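A minimal sketch of the matching step described above, assuming a toy vocabulary: a user model of weighted keywords and near-synonyms is compared by cosine similarity to tf-idf vectors built from the items' free-text descriptions. The user model, event texts, and weights are made up, and stemming and keyword expansion are omitted; this is not the Agnes implementation.

```python
import math
from collections import Counter

# Hypothetical user model: keywords and near-synonyms with weights.
user_model = {"robotics": 1.0, "automation": 0.7, "ai": 0.9, "hackathon": 0.8}

# Hypothetical event descriptions (free text).
events = {
    "Robotics club kickoff": "hands-on robotics and automation demos for beginners",
    "Poetry night":          "open mic poetry reading with local writers",
    "AI hackathon":          "weekend ai hackathon building machine learning projects",
}

docs = {name: Counter(text.split()) for name, text in events.items()}
n_docs = len(docs)

def tfidf(term, counts):
    """tf-idf weight of a term within one item description."""
    tf = counts[term] / sum(counts.values())
    df = sum(1 for c in docs.values() if term in c)
    return 0.0 if df == 0 else tf * math.log(n_docs / df)

def score(counts):
    """Cosine similarity between the user's weighted keywords and an item's tf-idf vector."""
    item = {t: tfidf(t, counts) for t in user_model}
    dot = sum(user_model[t] * item[t] for t in user_model)
    nu = math.sqrt(sum(w * w for w in user_model.values()))
    ni = math.sqrt(sum(v * v for v in item.values()))
    return 0.0 if ni == 0 else dot / (nu * ni)

# Rank events by similarity to the user model (highest first).
for name, counts in sorted(docs.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(counts):.3f}  {name}")
```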
Lentz, Jennifer J; He, Yuan; Townsend, James T
2014-01-01
This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)-most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy.
Lentz, Jennifer J.; He, Yuan; Townsend, James T.
2014-01-01
This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)—most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy. PMID:25202254
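A hedged sketch of the reaction-time-based capacity coefficient used in this kind of analysis, with cumulative hazards estimated as H(t) = -ln S(t) from empirical survivor functions of correct RTs. The RT samples below are synthetic, and the study's exact estimator may differ.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """H(t) = -ln S(t) from the empirical survivor function of a sample of correct RTs."""
    rts = np.asarray(rts, dtype=float)
    S = np.mean(rts > t)
    return -np.log(S) if S > 0 else np.inf

def capacity_or(rt_both, rt_left, rt_right, t_grid):
    """OR-type workload capacity coefficient:
       C(t) = H_both(t) / (H_left(t) + H_right(t)).
       C(t) = 1 suggests unlimited capacity; < 1 limited; > 1 super capacity."""
    return np.array([
        cumulative_hazard(rt_both, t) /
        (cumulative_hazard(rt_left, t) + cumulative_hazard(rt_right, t))
        for t in t_grid
    ])

# Hypothetical correct-response RTs (seconds) for two-ear and single-ear presentations.
rng = np.random.default_rng(0)
rt_both  = rng.gamma(shape=8, scale=0.05, size=400)   # faster on average
rt_left  = rng.gamma(shape=8, scale=0.06, size=400)
rt_right = rng.gamma(shape=8, scale=0.06, size=400)

t_grid = np.linspace(0.3, 0.7, 9)
print(np.round(capacity_or(rt_both, rt_left, rt_right, t_grid), 2))
```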
Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Fabik, Richard H.
1993-01-01
The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from plus or minus 0.90 deg R to plus or minus 0.20 deg R over a specified range.
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
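A minimal PyWavelets sketch of the kind of wavelet filtering reported to help: soft-thresholding of the detail coefficients of a noisy one-dimensional signal before it enters the reconstruction. The wavelet choice, threshold rule, and synthetic signal are assumptions, not the authors' settings.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold estimated
    from the finest-scale coefficients) and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))         # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic example: a smooth axial intensity derivative corrupted by photodetection noise.
x = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 3 * x) * np.exp(-2 * x)
noisy = clean + 0.15 * np.random.default_rng(1).standard_normal(x.size)
denoised = wavelet_denoise(noisy)
print("rms error noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("rms error denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```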
Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin
2016-01-01
Embedding cost-sensitive factors into the classifiers increases the classification stability and reduces the classification costs for classifying high-scale, redundant, and imbalanced datasets, such as the gene expression data. In this study, we extend our previous work, that is, Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm as the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection cost embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy still remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.
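A hedged sketch of the decision rule that misclassification and rejection costs induce on top of any probabilistic classifier (this is generic, not the CS-D-ELM itself): choose the class with the lowest expected cost, and reject when even that minimum exceeds the rejection cost. The cost matrix and posterior probabilities are illustrative.

```python
import numpy as np

def cost_sensitive_decision(probs, cost_matrix, rejection_cost):
    """probs: (n_samples, n_classes) posterior estimates from any classifier.
    cost_matrix[i, j]: cost of predicting class j when the true class is i.
    Returns the predicted class per sample, or -1 for 'reject'."""
    expected_cost = probs @ cost_matrix            # (n_samples, n_classes)
    best = expected_cost.argmin(axis=1)
    best_cost = expected_cost.min(axis=1)
    return np.where(best_cost <= rejection_cost, best, -1)

# Hypothetical two-class example with asymmetric costs (missing class 1 is expensive).
probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.20, 0.80]])
cost_matrix = np.array([[0.0, 1.0],    # true class 0
                        [5.0, 0.0]])   # true class 1
print(cost_sensitive_decision(probs, cost_matrix, rejection_cost=0.5))  # e.g. [0 -1 1]
```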
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
Experimental studies of electroweak physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etzion, E.
1997-09-01
Some new experimental electroweak physics results measured at LEP/SLD and the TEVATRON are discussed. The excellent accuracy achieved by the experiments still yields no significant evidence for deviation from the Standard Model predictions, nor any signal of physics beyond the Standard Model. The Higgs particle has still not been discovered, and a lower bound is given on its mass.
Sources of uncertainty in estimating stream solute export from headwater catchments at three sites
Ruth D. Yanai; Naoko Tokuchi; John L. Campbell; Mark B. Green; Eiji Matsuzaki; Stephanie N. Laseter; Cindi L. Brown; Amey S. Bailey; Pilar Lyons; Carrie R. Levine; Donald C. Buso; Gene E. Likens; Jennifer D. Knoepp; Keitaro Fukushima
2015-01-01
Uncertainty in the estimation of hydrologic export of solutes has never been fully evaluated at the scale of a small-watershed ecosystem. We used data from the Gomadansan Experimental Forest, Japan, Hubbard Brook Experimental Forest, USA, and Coweeta Hydrologic Laboratory, USA, to evaluate many sources of uncertainty, including the precision and accuracy of...
ERIC Educational Resources Information Center
Klein, P.; Hirth, M.; Gröber, S.; Kuhn, J.; Müller, A.
2014-01-01
Smartphones and tablets are used as experimental tools and for quantitative measurements in two traditional laboratory experiments for undergraduate physics courses. The Doppler effect is analyzed and the speed of sound is determined with an accuracy of about 5% using ultrasonic frequency and two smartphones, which serve as rotating sound emitter…
ERIC Educational Resources Information Center
Ferreira, P. Costa; Simão, A. M. Veiga; da Silva, A. Lopes
2017-01-01
This study aimed to understand how children reflect about learning, report their regulation of learning activity, and develop their performance in contemporary English as a Foreign Language instructional settings. A quasi-experimental design was used with one experimental group working in a self-regulated learning computer-supported instructional…
NASA Astrophysics Data System (ADS)
Larin, Kirill V.
Approximately 14 million people in the USA and more than 140 million people worldwide suffer from diabetes mellitus. The current glucose sensing technique involves a finger puncture several times a day to obtain a droplet of blood for analysis. There have been enormous efforts by many scientific groups and companies to quantify glucose concentration noninvasively using different optical techniques. However, these techniques face limitations associated with low sensitivity, accuracy, and insufficient specificity of glucose concentrations over a physiological range. Optical coherence tomography (OCT), a new technology, is being applied for noninvasive imaging in tissues with high resolution. OCT utilizes sensitive detection of photons coherently scattered from tissue. The high resolution of this technique allows for exceptionally accurate measurement of tissue scattering from a specific layer of skin compared with other optical techniques and, therefore, may provide noninvasive and continuous monitoring of blood glucose concentration with high accuracy. In this dissertation work I experimentally and theoretically investigate feasibility of noninvasive, real-time, sensitive, and specific monitoring of blood glucose concentration using an OCT-based biosensor. The studies were performed in scattering media with stable optical properties (aqueous suspensions of polystyrene microspheres and milk), animals (New Zealand white rabbits and Yucatan micropigs), and normal subjects (during oral glucose tolerance tests). The results of these studies demonstrated: (1) capability of the OCT technique to detect changes in scattering coefficient with the accuracy of about 1.5%; (2) a sharp and linear decrease of the OCT signal slope in the dermis with the increase of blood glucose concentration; (3) the change in the OCT signal slope measured during bolus glucose injection experiments (characterized by a sharp increase of blood glucose concentration) is higher than that measured in the glucose clamping experiments (characterized by slow, controlled increase of the blood glucose concentration); and (4) the accuracy of glucose concentration monitoring may substantially be improved if optimal dimensions of the probed skin area are used. The results suggest that high-resolution OCT technique has a potential for noninvasive, accurate, and continuous glucose monitoring with high sensitivity.
Effects of Night Work, Sleep Loss and Time on Task on Simulated Threat Detection Performance
Basner, Mathias; Rubinstein, Joshua; Fomberstein, Kenneth M.; Coble, Matthew C.; Ecker, Adrian; Avinash, Deepa; Dinges, David F.
2008-01-01
Study Objectives: To investigate the effects of night work and sleep loss on a simulated luggage screening task (SLST) that mimicked the x-ray system used by airport luggage screeners. Design: We developed more than 5,800 unique simulated x-ray images of luggage organized into 31 stimulus sets of 200 bags each. 25% of each set contained either a gun or a knife with low or high target difficulty. The 200-bag stimuli sets were then run on software that simulates an x-ray screening system (SLST). Signal detection analysis was used to obtain measures of hit rate (HR), false alarm rate (FAR), threat detection accuracy (A′), and response bias (B″D). Setting: Experimental laboratory study Participants: 24 healthy nonprofessional volunteers (13 women, mean age ± SD = 29.9 ± 6.5 years). Interventions: Subjects performed the SLST every 2 h during a 5-day period that included a 35 h period of wakefulness that extended to night work and then another day work period after the night without sleep. Results: Threat detection accuracy A′ decreased significantly (P < 0.001) while FAR increased significantly (P < 0.001) during night work, while both A′ (P = 0.001) and HR decreased (P = 0.008) during day work following sleep loss. There were prominent time-on-task effects on response bias B″D (P = 0.002) and response latency (P = 0.004), but accuracy A′ was unaffected. Both HR and FAR increased significantly with increasing study duration (both P < 0.001), while response latency decreased significantly (P < 0.001). Conclusions: This study provides the first systematic evidence that night work and sleep loss adversely affect the accuracy of detecting complex real world objects among high levels of background clutter. If the results can be replicated in professional screeners and real work environments, fatigue in luggage screening personnel may pose a threat for air traffic safety unless countermeasures for fatigue are deployed. Citation: Basner M; Rubinstein J; Fomberstein KM; Coble MC; Avinash D; Dinges DF. Effects of Night Work, Sleep Loss and Time on Task on Simulated Threat Detection Performance. SLEEP 2008;31(9):1251-1259. PMID:18788650
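For reference, the non-parametric indices named above can be computed directly from hit rate and false-alarm rate; a minimal sketch with made-up rates is shown below (the formulas follow the standard A′ and B″D definitions, and edge cases such as zero rates are not handled).

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity index A'."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def b_double_prime_d(hit_rate, fa_rate):
    """Donaldson's response-bias index B''D (positive values indicate a conservative bias)."""
    h, f = hit_rate, fa_rate
    return ((1 - h) * (1 - f) - h * f) / ((1 - h) * (1 - f) + h * f)

# Hypothetical screener performance before and after a night without sleep.
print(a_prime(0.85, 0.10), b_double_prime_d(0.85, 0.10))
print(a_prime(0.78, 0.18), b_double_prime_d(0.78, 0.18))
```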
Vu, Tuong-Van; Finkenauer, Catrin; Huizinga, Mariette; Novin, Sheida; Krabbendam, Lydia
2017-01-01
This study investigated whether individualism and collectivism (IC) at country, individual, and situational level influence how quickly and accurately people can infer mental states (i.e. theory of mind, or ToM), indexed by accuracy and reaction time in a ToM task. We hypothesized that collectivism (having an interdependent self and valuing group concerns), compared to individualism (having an independent self and valuing personal concerns), is associated with greater accuracy and speed in recognizing and understanding the thoughts and feelings of others. Students (N = 207) from individualism-representative (the Netherlands) and collectivism-representative (Vietnam) countries (Country IC) answered an individualism-collectivism questionnaire (Individual IC) and were randomly assigned to an individualism-primed, collectivism-primed, or no-prime task (Situational IC) before performing a ToM task. The data showed vast differences between the Dutch and Vietnamese groups that might not be attributable to experimental manipulation. Therefore, we analyzed the data for the groups separately and found that Individual IC did not predict ToM accuracy or reaction time performance. Regarding Situational IC, when primed with individualism, the accuracy performance of Vietnamese participants in affective ToM trials decreased compared to when primed with collectivism and when no prime was used. However, an interesting pattern emerged: Dutch participants were least accurate in affective ToM trials, while Vietnamese participants were quickest in affective ToM trials. Our research also highlights a dilemma faced by cross-cultural researchers who use hard-to-reach populations but face the challenge of disentangling experimental effects from biases that might emerge due to an interaction between cultural differences and experimental settings. We propose suggestions for overcoming such challenges.
Bragdon, Charles R; Malchau, Henrik; Yuan, Xunhua; Perinchief, Rebecca; Kärrholm, Johan; Börlin, Niclas; Estok, Daniel M; Harris, William H
2002-07-01
The purpose of this study was to develop and test a phantom model based on actual total hip replacement (THR) components to simulate the true penetration of the femoral head resulting from polyethylene wear. This model was used to study both the accuracy and the precision of radiostereometric analysis (RSA) in measuring wear. We also used this model to evaluate the optimum tantalum bead configuration for this particular cup design when used in a clinical setting. A physical model of a total hip replacement (a phantom) was constructed which could simulate progressive, three-dimensional (3-D) penetration of the femoral head into the polyethylene component of a THR. Using a coordinate measuring machine (CMM), the positioning of the femoral head in the phantom was measured to be accurate to within 7 microm. The accuracy and precision of an RSA analysis system were determined from five repeat examinations of the phantom using various experimental set-ups. The accuracy of the radiostereometric analysis, for the optimal experimental set-up studied, was 33 microm in the medial direction, 22 microm in the superior direction, 86 microm in the posterior direction and 55 microm for the resultant 3-D vector length. The corresponding precision at the 95% confidence interval, based on repositioning the phantom five times, measured 8.4 microm in the medial direction, 5.5 microm in the superior direction, 16.0 microm in the posterior direction, and 13.5 microm for the resultant 3-D vector length. This in vitro model is proposed as a useful tool for developing a standard for the evaluation of radiostereometric and other radiographic methods used to measure in vivo wear.
Finkenauer, Catrin; Huizinga, Mariette; Novin, Sheida; Krabbendam, Lydia
2017-01-01
This study investigated whether individualism and collectivism (IC) at country, individual, and situational level influence how quickly and accurately people can infer mental states (i.e. theory of mind, or ToM), indexed by accuracy and reaction time in a ToM task. We hypothesized that collectivism (having an interdependent self and valuing group concerns), compared to individualism (having an independent self and valuing personal concerns), is associated with greater accuracy and speed in recognizing and understanding the thoughts and feelings of others. Students (N = 207) from individualism-representative (the Netherlands) and collectivism-representative (Vietnam) countries (Country IC) answered an individualism-collectivism questionnaire (Individual IC) and were randomly assigned to an individualism-primed, collectivism-primed, or no-prime task (Situational IC) before performing a ToM task. The data showed vast differences between the Dutch and Vietnamese groups that might not be attributable to experimental manipulation. Therefore, we analyzed the data for the groups separately and found that Individual IC did not predict ToM accuracy or reaction time performance. Regarding Situational IC, when primed with individualism, the accuracy performance of Vietnamese participants in affective ToM trials decreased compared to when primed with collectivism and when no prime was used. However, an interesting pattern emerged: Dutch participants were least accurate in affective ToM trials, while Vietnamese participants were quickest in affective ToM trials. Our research also highlights a dilemma faced by cross-cultural researchers who use hard-to-reach populations but face the challenge of disentangling experimental effects from biases that might emerge due to an interaction between cultural differences and experimental settings. We propose suggestions for overcoming such challenges. PMID:28832602
Fast image matching algorithm based on projection characteristics
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation. Because the projections are normalized, correct matching is still achieved when the image brightness or signal amplitude increases in proportion. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
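A minimal sketch of the projection idea for the simplest case of a full-height template located along one axis: both images are collapsed into column-sum profiles and the offset is found by zero-mean normalized cross-correlation, which makes the match insensitive to a proportional brightness change. Full 2D localization (e.g., via window projections computed from cumulative sums) is omitted, and the arrays are synthetic.

```python
import numpy as np

def column_profile(img):
    """Collapse a 2D grayscale image into its column-sum (vertical projection) profile."""
    return img.sum(axis=0).astype(float)

def best_offset_ncc(profile, template_profile):
    """Best offset of a short 1D profile inside a longer one via zero-mean normalized
    cross-correlation; the normalization removes proportional brightness changes."""
    t = (template_profile - template_profile.mean()) / (template_profile.std() + 1e-12)
    scores = []
    for off in range(len(profile) - len(template_profile) + 1):
        w = profile[off:off + len(template_profile)]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores.append(float(np.dot(w, t)) / len(t))
    return int(np.argmax(scores)), max(scores)

# Synthetic example: a full-height template strip, brightened by a constant factor,
# is located along the horizontal axis using only 1D profiles.
rng = np.random.default_rng(2)
image = rng.random((120, 160))
template = image[:, 90:130] * 1.8          # proportional brightness change
offset, score = best_offset_ncc(column_profile(image), column_profile(template))
print("estimated x offset:", offset, "score:", round(score, 3))   # expect 90, ~1.0
```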
NASA Astrophysics Data System (ADS)
Matveev, D. T.; Chepurnov, B. D.
Test results obtained during 1980-1981 at the Zvenigorod station are presented for the Intercosmos laser rangefinder which was modified in various ways: e.g., optical components of the laser were replaced, and the mechanical Q-switch of the laser resonator was replaced by a phototropic Q-switch. Improved reliability was noted, and the ranging accuracy was increased by 1.5-2 times. It is concluded that the Zvenigorod tests indicate that the first-generation Intercosmos laser rangefinder can be effectively modernized at other Intercosmos tracking stations.
Location Estimation of Urban Images Based on Geographical Neighborhoods
NASA Astrophysics Data System (ADS)
Huang, Jie; Lo, Sio-Long
2018-04-01
Estimating the location of an image is a challenging computer vision problem, and the recent decade has witnessed increasing research efforts towards the solution of this problem. In this paper, we propose a new approach to the location estimation of images taken in urban environments. Experiments are conducted to quantitatively compare the estimation accuracy of our approach, against three representative approaches in the existing literature, using a recently published dataset of over 150 thousand Google Street View images and 259 user uploaded images as queries. According to the experimental results, our approach outperforms three baseline approaches and shows its robustness across different distance thresholds.
Effect of the initial configuration for user-object reputation systems
NASA Astrophysics Data System (ADS)
Wu, Ying-Ying; Guo, Qiang; Liu, Jian-Guo; Zhang, Yi-Cheng
2018-07-01
Identifying user reputation accurately is important for online social systems. For different values of the fair rating parameter q, we vary the parameters α and β of the beta probability distribution (RBPD) method for ranking online user reputation and investigate how the initial configuration of the RBPD method affects ranking performance. Experimental results for the Netflix and MovieLens data sets show that when the parameter q equals 0.8 and 0.9, the accuracy value AUC increases by about 4.5% and 3.5% for the Netflix data set, while the AUC value increases by about 1.5% for the MovieLens data set when q is 0.9. Furthermore, we investigate the evolution of the AUC value for different α and β, and find that as the rating records increase, the AUC value increases by about 0.2 and 0.16 for the Netflix and MovieLens data sets, respectively, indicating that online users' reputations increase as they rate more and more objects.
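A heavily hedged, generic sketch of a Beta-distribution reputation score (not the paper's exact RBPD update): ratings within a tolerance of an object's consensus score count toward the α (fair) side of a Beta(α, β) posterior, whose mean serves as the reputation. The function name, tolerance, and data are all made up.

```python
def beta_reputation(user_ratings, object_consensus, alpha=1.0, beta=1.0, tol=1.0):
    """Generic Beta-prior reputation sketch: a rating counts as 'fair' if it lies
    within `tol` of the object's consensus score. Returns the posterior mean."""
    fair = sum(abs(r - object_consensus[o]) <= tol for o, r in user_ratings.items())
    unfair = len(user_ratings) - fair
    return (alpha + fair) / (alpha + beta + fair + unfair)

# Hypothetical data: object consensus scores and two users' ratings (1-5 stars).
consensus = {"m1": 4.2, "m2": 2.1, "m3": 3.5}
honest = {"m1": 4.0, "m2": 2.0, "m3": 4.0}
random_rater = {"m1": 1.0, "m2": 5.0, "m3": 3.0}
print("honest user reputation:", round(beta_reputation(honest, consensus), 2))
print("random user reputation:", round(beta_reputation(random_rater, consensus), 2))
```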
NASA Astrophysics Data System (ADS)
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work aims to simulate the reconstruction of spectroscopic measurements by a multi-view parallel-beam scanning geometry and to analyze the effect of the number of projection rays on reconstruction accuracy. The results show that reconstruction quality increases sharply with the number of projection rays up to about 180 for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstruction is more accurate than the water vapor concentration obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and greatly improve the reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles. This feasible formulation for reconstruction research is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
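A minimal sketch of the basic ART (Kaczmarz) update that such tomographic TDLAS reconstructions build on: each projection ray contributes one linear equation relating the path-integrated measurement to the gridded field, and the solution is relaxed toward each equation in turn. The toy geometry below (horizontal and vertical rays on a 4 × 4 grid) is illustrative, not the paper's improved algorithm.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=50, relax=0.5):
    """Basic ART (Kaczmarz) iteration: x is updated ray by ray so that the
    path-integrated value A[i] @ x moves toward the measured projection b[i]."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        x = np.clip(x, 0.0, None)      # enforce non-negative absorbance/concentration
    return x

# Toy 4x4 grid sampled by horizontal and vertical rays only (8 equations, 16 unknowns).
n = 4
truth = np.zeros((n, n)); truth[1:3, 1:3] = 1.0       # absorbing/hot core
rays = [np.zeros((n, n)) for _ in range(2 * n)]
for k in range(n):
    rays[k][k, :] = 1.0        # horizontal ray through row k
    rays[n + k][:, k] = 1.0    # vertical ray through column k
A = np.array([r.ravel() for r in rays])
b = A @ truth.ravel()
print(art_reconstruct(A, b).reshape(n, n).round(2))
```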
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented in which drift, the random fluctuation of the signal intensity, is compensated for based on estimation of the drift function by a moving average. Using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs, it was shown that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore both the average signal intensity and the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the signal distributions of dissolved signals, fails in some cases when standards and samples are affected by drift, but the present method was shown to restore accuracy in these cases as well. In this procedure, relatively high particle signals are first removed using a 3 × sigma method, treated separately, and added back after correction. The method can also correct for flicker noise, which increases when the signal intensity increases because of drift. Accuracy was improved in many cases when flicker correction was used, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be repeated. Graphical Abstract: A spICP-MS signal affected by drift is corrected by adjusting the local (moving) averages and standard deviations to their respective values at a reference time; for calibration standards, particle events are removed before the correction. This approach is shown to yield particle size distributions in cases where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
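A hedged numpy sketch of the core idea: particle events are masked with an iterative 3σ rule, the slowly varying dissolved level is estimated with a moving average, and the signal is rescaled so that the local level matches its value at a reference time. The window length, number of masking passes, and synthetic trace are assumptions; the published normalization may differ in detail.

```python
import numpy as np

def mask_particles(signal, n_pass=5):
    """Iteratively flag dwell readings above mean + 3*sigma as particle events."""
    keep = np.ones(signal.size, dtype=bool)
    for _ in range(n_pass):
        mu, sd = signal[keep].mean(), signal[keep].std()
        keep = signal <= mu + 3 * sd
    return ~keep                                     # True where a particle was detected

def drift_correct(signal, window=501):
    """Rescale the signal so the local (moving-average) dissolved level matches the
    level at the start of the run, ignoring particle events when averaging."""
    particles = mask_particles(signal)
    background = np.where(particles, np.nan, signal)
    kernel = np.ones(window)
    valid = (~np.isnan(background)).astype(float)
    summed = np.convolve(np.nan_to_num(background), kernel, mode="same")
    counts = np.convolve(valid, kernel, mode="same")
    local_mean = summed / np.maximum(counts, 1.0)    # moving average, NaNs ignored
    reference = local_mean[:window].mean()           # dissolved level at reference time
    return signal * reference / local_mean

# Synthetic spICP-MS trace: dissolved background with linear drift plus sparse particle spikes.
rng = np.random.default_rng(3)
n = 20000
dissolved = 100.0 * (1.0 + 0.5 * np.arange(n) / n)   # 50 % upward drift
signal = rng.poisson(dissolved).astype(float)
spikes = rng.choice(n, size=200, replace=False)
signal[spikes] += rng.normal(3000, 500, size=200)
corrected = drift_correct(signal)
print("dissolved level, first vs last 10%:",
      round(float(np.median(corrected[:n // 10])), 1),
      round(float(np.median(corrected[-n // 10:])), 1))
```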
Dopaminergic stimulation enhances confidence and accuracy in seeing rapidly presented words.
Lou, Hans C; Skewes, Joshua C; Thomsen, Kristine Rømer; Overgaard, Morten; Lau, Hakwan C; Mouridsen, Kim; Roepstorff, Andreas
2011-02-23
Liberal acceptance, overconfidence, and increased activity of the neurotransmitter dopamine have been proposed to account for abnormal sensory experiences, for instance, hallucinations in schizophrenia. In normal subjects, increased sensory experience in Yoga Nidra meditation is linked to striatal dopamine release. We therefore hypothesize that the neurotransmitter dopamine may function as a regulator of subjective confidence of visual perception in the normal brain. Although much is known about the effect of stimulation by neurotransmitters on cognitive functions, their effect on subjective confidence of perception has never been recorded experimentally before. In a controlled study of 24 normal, healthy female university students with the dopamine agonist pergolide given orally, we show that dopaminergic activation increases confidence in seeing rapidly presented words. It also improves performance in a forced-choice word recognition task. These results demonstrate neurotransmitter regulation of subjective conscious experience of perception and provide evidence for a crucial role of dopamine.
Effects of normal aging on memory for multiple contextual features.
Gagnon, Sylvain; Soulard, Kathleen; Brasgold, Melissa; Kreller, Joshua
2007-08-01
Twenty-four younger (18-35 years) and 24 older adult participants (65 or older) were exposed to three experimental conditions involving the memorization of words and their associated contextual features, with contextual feature complexity increasing from Condition 1 to 3. In Condition 1, the words presented varied on only one binary feature (color, size, or character), while in Conditions 2 and 3, the words varied on two and three binary features, respectively. Each condition was carried out as follows: (1) learning of a word list; (2) encoding of words and their contextual features; (3) delay; and (4) memory for contextual features through a discrimination task. Results indicated that young adults discriminated more features than older adults in all conditions. In both age groups, contextual feature discrimination accuracy decreased as the number of features increased. Moreover, older adults demonstrated near-floor performance when tested with two or more binary features. We conclude that increasing context complexity strains attentional resources.
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
An improved thermoregulatory model for cooling garment applications with transient metabolic rates
NASA Astrophysics Data System (ADS)
Westin, Johan K.
Current state-of-the-art thermoregulatory models do not predict body temperatures with the accuracies that are required for the development of automatic cooling control in liquid cooling garment (LCG) systems. Automatic cooling control would be beneficial in a variety of space, aviation, military, and industrial environments for optimizing cooling efficiency, for making LCGs as portable and practical as possible, for alleviating the individual from manual cooling control, and for improving thermal comfort and cognitive performance. In this study, we adopt the Fiala thermoregulatory model, which has previously demonstrated state-of-the-art predictive abilities in air environments, for use in LCG environments. We validate the numerical formulation with analytical solutions to the bioheat equation, and find our model to be accurate and stable with a variety of different grid configurations. We then compare the thermoregulatory model's tissue temperature predictions with experimental data where individuals, equipped with an LCG, exercise according to a 700 W rectangular type activity schedule. The root mean square (RMS) deviation between the model response and the mean experimental group response is 0.16°C for the rectal temperature and 0.70°C for the mean skin temperature, which is within state-of-the-art variations. However, with a mean absolute body heat storage error of 9.7 W·h, the model fails to satisfy the ±6.5 W·h accuracy that is required for the automatic LCG cooling control development. In order to improve model predictions, we modify the blood flow dynamics of the thermoregulatory model. Instead of using step responses to changing requirements, we introduce exponential responses to the muscle blood flow and the vasoconstriction command. We find that such modifications have an insignificant effect on temperature predictions. However, a new vasoconstriction dependency, i.e. the rate of change of hypothalamus temperature weighted by the hypothalamus error signal (ΔThy·dThy/dt), proves to be an important signal that governs the thermoregulatory response during conditions of simultaneously increasing core and decreasing skin temperatures, which is a common scenario in LCG environments. With the new ΔThy·dThy/dt dependency in the vasoconstriction command, the body heat storage error for the exercise period is reduced by 59% (from 12.9 W·h to 5.2 W·h). Even though the new error of 5.8 W·h for the total activity schedule is within the target accuracy of ±6.5 W·h, it fails to stay within the target accuracy during the entire activity schedule. With additional improvements to the central blood pool formulation, the LCG boundary condition, and the agreement between model set-points and actual experimental initial conditions, it seems possible to achieve the strict accuracy that is needed for automatic cooling control development.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, using reduced temperature, acentric factor and molecular weight of the ionic liquids, and nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to determine the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. The results estimated by the developed GMDH model also exhibit higher accuracy than the available theoretical correlations.
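A minimal sketch of one GMDH selection layer of the kind such models are built from (not the study's fitted model): every pair of inputs gets a quadratic polynomial neuron fitted on a training split, and candidates are ranked by validation error. The data are synthetic rather than the ionic-liquid heat-capacity set.

```python
import numpy as np
from itertools import combinations

def quad_features(xi, xj):
    """Design matrix of the classic GMDH polynomial neuron:
    y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=3):
    """Fit one candidate neuron per input pair and keep the best by validation MSE."""
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = quad_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_val = quad_features(X_val[:, i], X_val[:, j]) @ coef
        mse = np.mean((pred_val - y_val) ** 2)
        candidates.append((mse, (i, j), coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Synthetic example: 4 inputs, target depends nonlinearly on the first two.
rng = np.random.default_rng(4)
X = rng.random((400, 4))
y = 2.0 + 3.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 0] ** 2 + 0.05 * rng.standard_normal(400)
X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]
for mse, pair, _ in gmdh_layer(X_tr, y_tr, X_va, y_va):
    print(f"inputs {pair}: validation MSE {mse:.4f}")
```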
Guildenbecher, Daniel R.; Cooper, Marcia A.; Sojka, Paul E.
2016-04-05
High-speed (20 kHz) digital in-line holography (DIH) is applied for 3D quantification of the size and velocity of fragments formed from the impact of a single water drop onto a thin film of water and of burning aluminum particles from the combustion of a solid rocket propellant. To address the depth-of-focus problem in DIH, a regression-based multiframe tracking algorithm is employed, and the out-of-plane experimental displacement accuracy is shown to be improved by an order of magnitude. Comparison of the results with previous DIH measurements using low-speed recording shows improved positional accuracy, with the added advantage of detailed resolution of transient dynamics from single experimental realizations. Furthermore, the method is shown to be particularly advantageous for quantification of particle mass flow rates. For the investigated particle fields, the mass flow rates, which have been automatically measured from single experimental realizations, are found to be within 8% of the expected values.
NASA Technical Reports Server (NTRS)
Lucas, L. J.
1982-01-01
The accuracy of the Neuber equation at room temperature and 1,200 F was experimentally determined under cyclic load conditions with hold times. All strains were measured with an interferometric technique at both the local and remote regions of notched specimens. At room temperature, strains were obtained for the initial response at one load level and for cyclically stable conditions at four load levels. Stresses in notched members were simulated by subjecting smooth specimens to the same strains as were recorded on the notched specimen. Local stress-strain response was then predicted with excellent accuracy by subjecting a smooth specimen to limits established by the Neuber equation. Data at 1,200 F were obtained with the same experimental techniques but only for the cyclically stable conditions. The Neuber prediction at this temperature gave relatively accurate results in terms of predicting stress and strain points.
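A hedged sketch of how Neuber limits of the kind used above can be computed numerically: Neuber's rule σε = (K_t S)²/E, combined with a Ramberg-Osgood cyclic stress-strain curve, is solved for the local stress by bisection. The material constants, notch factor, and nominal stress are illustrative, not the paper's values.

```python
# Illustrative material constants (MPa units): elastic modulus, cyclic strength
# coefficient K', and cyclic hardening exponent n' of a Ramberg-Osgood curve.
E, K_prime, n_prime = 200e3, 1200.0, 0.12

def ramberg_osgood_strain(sigma):
    """Total strain for a given stress: elastic plus plastic parts."""
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

def neuber_local_stress(K_t, S_nominal):
    """Solve Neuber's rule  sigma * eps(sigma) = (K_t * S)^2 / E  by bisection."""
    target = (K_t * S_nominal) ** 2 / E
    lo, hi = 1e-6, K_t * S_nominal        # local stress cannot exceed the elastic estimate K_t*S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return sigma, ramberg_osgood_strain(sigma)

sigma_local, eps_local = neuber_local_stress(K_t=2.5, S_nominal=200.0)
print(f"local stress {sigma_local:.1f} MPa, local strain {eps_local * 100:.3f} %")
```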
Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition
NASA Astrophysics Data System (ADS)
McGilvray, M.; Dann, A. G.; Jacobs, P. A.
2013-07-01
Only a limited number of free-stream flow properties can be measured in hypersonic impulse facilities at the nozzle exit. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis that requires little computational resource is used to calculate quasi-steady gas properties. This simple analysis combines initial fill conditions and experimental measurements in analytical calculations of each major flow process, using forward coupling with minor corrections to include processes that are not directly modeled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties for a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results from the complete facility calculations to within the accuracy of the experimental measurements.
Nielsen, Niklas; Nielsen, Jorgen G.; Horsley, Alex R.
2013-01-01
Background A large body of evidence has now accumulated describing the advantages of multiple breath washout tests over conventional spirometry in cystic fibrosis (CF). Although the majority of studies have used exogenous sulphur hexafluoride (SF6) as the tracer gas this has also led to an increased interest in nitrogen washout tests, despite the differences between these methods. The impact of body nitrogen excreted across the alveoli has previously been ignored. Methods A two-compartment lung model was developed that included ventilation heterogeneity and dead space (DS) effects, but also incorporated experimental data on nitrogen excretion. The model was used to assess the impact of nitrogen excretion on washout progress and accuracy of functional residual capacity (FRC) and lung clearance index (LCI) measurements. Results Excreted nitrogen had a small effect on accuracy of FRC (1.8%) in the healthy adult model. The error in LCI calculated with true FRC was greater (6.3%), and excreted nitrogen contributed 21% of the total nitrogen concentration at the end of the washout. Increasing DS and ventilation heterogeneity both caused further increase in measurement error. LCI was increased by 6–13% in a CF child model, and excreted nitrogen increased the end of washout nitrogen concentration by 24–49%. Conclusions Excreted nitrogen appears to have complex but clinically significant effects on washout progress, particularly in the presence of abnormal gas mixing. This may explain much of the previously described differences in washout outcomes between SF6 and nitrogen. PMID:24039916
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions based on shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
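A minimal sketch of the depth-selection idea summarised above: keep stacking restricted Boltzmann machine layers while the newest layer still lowers the reconstruction error by more than a tolerance. The layer sizes, tolerance, learning settings and random data are assumptions for illustration, and sklearn's BernoulliRBM stands in for whatever training scheme the paper actually used.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)
X = rng.random((500, 64))  # placeholder feature matrix scaled to [0, 1]

def reconstruction_error(rbm, data):
    """Mean squared error of a one-step reconstruction v -> h -> v."""
    hidden = expit(data @ rbm.components_.T + rbm.intercept_hidden_)
    recon = expit(hidden @ rbm.components_ + rbm.intercept_visible_)
    return np.mean((data - recon) ** 2)

layers, errors, current = [], [], X
tol, max_layers, n_hidden = 1e-3, 5, 32          # assumed settings
for depth in range(max_layers):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0).fit(current)
    err = reconstruction_error(rbm, current)
    if errors and errors[-1] - err < tol:
        break                                    # the new layer no longer helps; stop growing
    layers.append(rbm)
    errors.append(err)
    current = rbm.transform(current)             # hidden activations feed the next RBM

print(f"selected depth: {len(layers)} layers")
```

In a full pipeline the stacked layers would then be fine-tuned with a supervised objective, matching the unsupervised-plus-supervised scheme the abstract describes.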
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Dong, E-mail: d.qiu@uq.edu.au; Zhang, Mingxing
2014-08-15
A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate the proposed method in determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved by using the new method. The sources of the experimental errors as well as the applicability of this method are discussed. Some tips to minimise the experimental errors are also suggested. - Highlights: • An improved algorithm is formulated to measure the foil thickness. • The habit plane can be determined with a single-tilt holder based on the new algorithm. • Better accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
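For readers unfamiliar with the two classifiers being compared, the sketch below contrasts a minimum-distance-to-class-mean rule with a Gaussian maximum likelihood rule on synthetic two-dimensional data; the data and class statistics are placeholders for multispectral samples, not the study's crop data.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
true_means = [np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([1.0, 4.0])]
true_covs = [np.eye(2), np.diag([2.0, 0.5]), np.array([[1.0, 0.6], [0.6, 1.5]])]
X = np.vstack([rng.multivariate_normal(m, c, 300) for m, c in zip(true_means, true_covs)])
y = np.repeat(np.arange(3), 300)
idx = rng.permutation(len(y))
train, test = idx[:600], idx[600:]

# Class statistics estimated from the training split
means = [X[train][y[train] == k].mean(axis=0) for k in range(3)]
covs = [np.cov(X[train][y[train] == k].T) for k in range(3)]

def min_distance_predict(samples):
    """Assign each sample to the nearest class mean (Euclidean distance)."""
    d = np.linalg.norm(samples[:, None, :] - np.array(means)[None], axis=2)
    return d.argmin(axis=1)

def max_likelihood_predict(samples):
    """Assign each sample to the class with the highest Gaussian log-likelihood."""
    ll = np.column_stack([multivariate_normal(m, c).logpdf(samples)
                          for m, c in zip(means, covs)])
    return ll.argmax(axis=1)

print("minimum distance:", np.mean(min_distance_predict(X[test]) == y[test]))
print("maximum likelihood:", np.mean(max_likelihood_predict(X[test]) == y[test]))
```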
Positional Accuracy in Optical Trap-Assisted Nanolithography
NASA Astrophysics Data System (ADS)
Arnold, Craig B.; McLeod, Euan
2009-03-01
The ability to directly print patterns on size scales below 100 nm is important for many applications where the production or repair of high resolution and density features are important. Laser-based direct-write methods have the benefit of quickly and easily being able to modify and create structures on existing devices, but feature sizes are conventionally limited by diffraction. In this presentation, we show how to overcome this limit with a new method of probe-based near-field nanopatterning in which we employ a CW laser to optically trap and manipulate dispersed microspheres against a substrate using a 2-d Bessel beam optical trap. A secondary, pulsed nanosecond laser at 355 nm is directed through the bead and used to modify the surface below the microsphere, taking advantage of the near-field enhancement in order to produce materials modification with feature sizes under 100 nm. Here, we analyze the 3-d positioning accuracy of the microsphere through analytic modeling as a function of experimental parameters. The model is verified in all directions for our experimental conditions and is used to predict the conditions required for improved positional accuracy.
NASA Astrophysics Data System (ADS)
Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun
2014-12-01
A test environment is established to obtain experimental data for verifying the positioning model derived previously based on the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given that compare object positions measured with DGPS (with a measurement accuracy of 10 centimeters), taken as the reference, against those measured with the positioning model. Error sources of the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are examined based on error transfer and synthesis rules. A conclusion is drawn that the measurement accuracy of surface surveillance based on binocular stereo vision measurement is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).
The active learning hypothesis of the job-demand-control model: an experimental examination.
Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas
2014-01-01
The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administration Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff; that is, participants in the high-control conditions worked more slowly but with greater accuracy than participants in the low-control conditions.
Screening for Learning and Memory Mutations: A New Approach.
Gallistel, C R; King, A P; Daniel, A M; Freestone, D; Papachristos, E B; Balci, F; Kheifets, A; Zhang, J; Su, X; Schiff, G; Kourtev, H
2010-01-30
We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates the handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from 3 protocols within a total of 7-9 days after the mice are placed in the test environment. Only a single 24-hour period is required for the completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, the camera calibration algorithm, which is based on the plane template, is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera to obtain calibration parameters, which are used to correct calculation points of the image before and after deformation. The displacement field before and after correction is compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
NASA Astrophysics Data System (ADS)
Sang, Xiahan
Intermetallics offer unique property combinations often superior to those of more conventional solid-solution alloys of identical composition. Understanding of bonding in intermetallics would greatly accelerate the development of intermetallics for advanced and high-performance engineering applications. The tetragonal L10-ordered intermetallics TiAl, FePd and FePt are used as model systems to experimentally measure their electron densities using the quantitative convergent beam electron diffraction (QCBED) method and then compare details of the 3d-4d (FePd) and 3d-5d (FePt) electron interactions to elucidate their role in the properties of the respective ferromagnetic L10-ordered intermetallics FePd and FePt. A new multi-beam off-zone-axis condition QCBED method has been developed to increase the sensitivity of CBED patterns to changes in the structure factors and the anisotropic Debye-Waller (DW) factors. Unprecedented accuracy and precision in structure and DW factor measurements have been achieved by acquiring CBED patterns using a beam-sample geometry that ensures strong dynamical interaction between the fast electrons and the periodic potential in the crystalline samples. This experimental method has been successfully applied to diamond cubic Si, chemically ordered B2 cubic NiAl, and tetragonal L10-ordered TiAl and FePd. The accurate and precise experimental DW and structure factors for L10 TiAl and FePd allow direct evaluation of computer calculations using current state-of-the-art density functional theory (DFT) based electron structure modeling. The experimental electron density difference map of L10 TiAl shows that the DFT calculations describe bonding to a sufficient accuracy for the s- and p-electron interactions, e.g., in the Al layer. However, it indicates significant quantitative differences from the experimental measurements for the 3d-3d interactions of the Ti atoms, e.g., in the Ti layers. The DFT calculations for L10 FePd also show that the current DFT approximations insufficiently describe the interactions between Fe-Fe (3d-3d), Fe-Pd (3d-4d) and Pd-Pd (4d-4d) electrons, which indicates the necessity to evaluate the applicability of different DFT approximations, and also provides experimental data for the development of new DFT approximations that better describe transition-metal-based intermetallic systems.
Chénier, Félix; Aissaoui, Rachid; Gauthier, Cindy; Gagnon, Dany H
2017-02-01
The commercially available SmartWheel™ is widely used in research and increasingly used in clinical practice to measure the forces and moments applied on the wheelchair pushrims by the user. However, in some situations (i.e. cambered wheels or increased pushrim weight), the recorded kinetics may include dynamic offsets that affect the accuracy of the measurements. In this work, an automatic method to identify and cancel these offsets is proposed and tested. First, the method was tested on an experimental bench with different cambers and pushrim weights. Then, the method was generalized to wheelchair propulsion. Nine experienced wheelchair users propelled their own wheelchairs instrumented with two SmartWheels with anti-slip pushrim covers. The dynamic offsets were correctly identified using the propulsion acquisition, without needing a separate baseline acquisition. A kinetic analysis was performed with and without dynamic offset cancellation using the proposed method. The most altered kinetic variables during propulsion were the vertical and total forces, with errors of up to 9 N (p<0.001, large effect size of 5). This method is simple to implement, fully automatic and requires no further acquisitions. Therefore, we advise using it systematically to enhance the accuracy of existing and future kinetic measurements. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Turbine blade tip clearance measurements using skewed dual optical beams of tip timing
NASA Astrophysics Data System (ADS)
Ye, De-chao; Duan, Fa-jie; Guo, Hao-tian; Li, Yangzong; Wang, Kai
2011-12-01
Optimization and active control of the clearance between turbine blades and the engine case is identified, especially in the aerospace community, as a key technology to increase engine efficiency, reduce fuel consumption and emissions, and increase service life. However, the tip clearance varies during different operating conditions. Thus a reliable non-contact and online detection system is essential and is ultimately used to close the tip clearance control loop. This paper describes a fiber-optic clearance measuring system applying skewed dual optical beams to detect the traverse time of passing blades. The two beams were specially designed with an outward angle of 18 degrees, and the beam spot diameters are less than 100 μm within the 0-4 mm working range to achieve a high signal-to-noise ratio and high sensitivity. Theoretical analysis shows that the measuring accuracy is not compromised by degradation of signal intensity caused by environmental conditions such as light source instability, contamination and blade tip imperfection. Experimental tests achieved a high resolution of 10 μm in the rotational speed range 2000-18000 RPM and a measurement accuracy of 15 μm, indicating that the system is capable of providing accurate and reliable data for active clearance control (ACC).
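A hedged geometric sketch of how clearance might be recovered from two outward-skewed beams: if the beam spots separate linearly with clearance, the blade-passing time between the two crossings scales with the gap. The symmetric 9° half-angle (read as half of the quoted 18° outward angle), the zero separation at the probe face, and all numbers are assumptions for illustration, not the paper's calibration.

```python
import math

def tip_clearance(delta_t_s, rpm, blade_radius_m, half_angle_deg=9.0,
                  probe_face_separation_m=0.0):
    """Estimate clearance (m) from the crossing-time difference of the two beams."""
    tip_speed = 2.0 * math.pi * blade_radius_m * rpm / 60.0      # blade tip speed, m/s
    spot_separation = tip_speed * delta_t_s                      # beam-spot spacing at the tip
    return ((spot_separation - probe_face_separation_m)
            / (2.0 * math.tan(math.radians(half_angle_deg))))

# Example: 2 microseconds between crossings at 10000 RPM with a 0.3 m blade radius (assumed)
print(f"clearance ~ {tip_clearance(2e-6, 10000, 0.3) * 1e3:.2f} mm")
```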
Nonintrusive, multipoint velocity measurements in high-pressure combustion flows
NASA Technical Reports Server (NTRS)
Allen, M.; Davis, S.; Kessler, W.; Legner, H.; Mcmanus, K.; Mulhall, P.; Parker, T.; Sonnenfroh, D.
1993-01-01
A combined experimental and analytical effort was conducted to demonstrate the applicability of OH Doppler-shifted fluorescence imaging of velocity distributions in supersonic combustion gases. The experiments were conducted in the underexpanded exhaust flow from a 6.8 atm, 2400 K, H2-O2-N2 burner exhausting into the atmosphere. In order to quantify the effects of in-plane variations of the gas thermodynamic properties on the measurement accuracy, a set of detailed measurements of the OH (1,0) band collisional broadening and shifting in H2-air gases was produced. The effect of pulse-to-pulse variations in the dye laser bandshape was also examined in detail, and a modification was developed which increased the single-pulse bandwidth, thereby increasing the intraimage velocity dynamic range as well as reducing the sensitivity of the velocity measurement to the gas property variations. Single point and imaging measurements of the velocity field in the exhaust flowfield were compared with 2D, finite-rate kinetics simulations of the flowfield. Relative velocity accuracies of +/- 50 m/s out of 1600 m/s were achieved in time-averaged imaging measurements of the flow over an order of magnitude variation in pressure and a factor of two variation in temperature.
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network
2018-01-01
Skin lesions such as melanoma are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we proposed two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3. PMID:29439500
NASA Astrophysics Data System (ADS)
Zhu, Ying; Fearn, Tom; MacKenzie, Gary; Clark, Ben; Dunn, Jason M.; Bigio, Irving J.; Bown, Stephen G.; Lovat, Laurence B.
2009-07-01
Elastic scattering spectroscopy (ESS) may be used to detect high-grade dysplasia (HGD) or cancer in Barrett's esophagus (BE). When spectra are measured in vivo by a hand-held optical probe, variability among replicated spectra from the same site can hinder the development of a diagnostic model for cancer risk. An experiment was carried out on excised tissue to investigate how two potential sources of this variability, pressure and angle, influence spectral variability, and the results were compared with the variations observed in spectra collected in vivo from patients with Barrett's esophagus. A statistical method called error removal by orthogonal subtraction (EROS) was applied to model and remove this measurement variability, which accounted for 96.6% of the variation in the spectra, from the in vivo data. Its removal allowed the construction of a diagnostic model with specificity improved from 67% to 82% (with sensitivity fixed at 90%). The improvement was maintained in predictions on an independent in vivo data set. EROS works well as an effective pretreatment for Barrett's in vivo data by identifying measurement variability and ameliorating its effect. The procedure reduces the complexity and increases the accuracy and interpretability of the model for classification and detection of cancer risk in Barrett's esophagus.
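A minimal numerical sketch of the orthogonal-subtraction idea (EROS) described above, assuming replicate spectra from the same site are available: the leading directions of within-site replicate variability are estimated by SVD and projected out of every spectrum. Array shapes and the number of removed components are illustrative assumptions.

```python
import numpy as np

def eros_filter(spectra, site_ids, n_remove=2):
    """Remove the leading directions of within-site replicate variability.

    spectra : (n_measurements, n_wavelengths) array
    site_ids: length n_measurements, identifying replicate groups
    """
    site_ids = np.asarray(site_ids)
    deviations = []
    for site in np.unique(site_ids):
        block = spectra[site_ids == site]
        deviations.append(block - block.mean(axis=0, keepdims=True))
    D = np.vstack(deviations)                 # within-site deviations only

    # Leading right singular vectors of D span the measurement-variability subspace
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    V = vt[:n_remove].T                       # (n_wavelengths, n_remove)

    # Orthogonal subtraction: project spectra onto the complement of that subspace
    return spectra - spectra @ V @ V.T

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 100))                # placeholder spectra
sites = np.repeat(np.arange(10), 3)           # 10 sites, 3 replicates each
print(eros_filter(X, sites).shape)
```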
Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E
2011-06-01
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded using MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (+/- 2% and +/- 5%, for volumes ≥ 5 ml or < 5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (+/- 6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within +/- 10% for volumes in the range 20-110 ml. The preliminary test of application on patients demonstrated the suitability of the method in a clinical setting. The MC-guided delineation of tumor volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumors of irregular shape and/or nonuniform distribution of the background activity are still in progress.
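The iterative loop (i)-(iii) described above can be sketched as follows; the recovery-coefficient and threshold-volume calibration functions, the toy image and all numbers are placeholders standing in for the system-specific calibrations, not the authors' MATLAB implementation.

```python
import numpy as np

def rithm_segment(image, voxel_volume_ml, background,
                  recovery_coefficient, threshold_from_calibration,
                  v0_ml=10.0, tol_ml=0.1, max_iter=50):
    """Iterative threshold segmentation with recovery-coefficient correction."""
    volume = v0_ml
    mask = image >= image.max()                      # trivial initial mask
    for _ in range(max_iter):
        # (i) measured source-to-background ratio, corrected by RC(volume)
        sbr_corrected = (image.max() / background) / recovery_coefficient(volume)
        # (ii) threshold (as a fraction of the maximum) from the calibration data
        frac = threshold_from_calibration(sbr_corrected, volume)
        # (iii) new volume estimate from thresholding the image
        mask = image >= frac * image.max()
        new_volume = mask.sum() * voxel_volume_ml
        if abs(new_volume - volume) < tol_ml:
            break                                    # converged
        volume = new_volume
    return mask, volume

# Toy usage with made-up calibration curves and a synthetic hot sphere
rc = lambda v_ml: min(1.0, 0.5 + 0.1 * v_ml)         # placeholder RC(volume)
thr = lambda sbr, v_ml: 0.35 + 0.5 / sbr             # placeholder threshold-volume calibration
z, y, x = np.mgrid[-20:20, -20:20, -20:20]
img = 1.0 + 9.0 * (np.sqrt(x**2 + y**2 + z**2) < 8)  # background 1, hot sphere 10
mask, vol = rithm_segment(img, voxel_volume_ml=0.02, background=1.0,
                          recovery_coefficient=rc, threshold_from_calibration=thr)
print(f"estimated volume: {vol:.1f} ml")
```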
Experimental Studies on the Mechanical Behaviour of Rock Joints with Various Openings
NASA Astrophysics Data System (ADS)
Li, Y.; Oh, J.; Mitra, R.; Hebblewhite, B.
2016-03-01
The mechanical behaviour of rough joints is markedly affected by the degree of joint opening. A systematic experimental study was conducted to investigate the effect of the initial opening on both normal and shear deformations of rock joints. Two types of joints with triangular asperities were produced in the laboratory and subjected to compression tests and direct shear tests with different initial opening values. The results showed that opened rock joints allow much greater normal closure and result in much lower normal stiffness. A semi-logarithmic law incorporating the degree of interlocking is proposed to describe the normal deformation of opened rock joints. The proposed equation agrees well with the experimental results. Additionally, the results of direct shear tests demonstrated that shear strength and dilation are reduced because of reduced involvement of and increased damage to asperities in the process of shearing. The results indicate that constitutive models of rock joints that consider the true asperity contact area can be used to predict shear resistance along opened rock joints. Because rock masses are loosened and rock joints become open after excavation, the model suggested in this study can be incorporated into numerical procedures such as finite-element or discrete-element methods. Use of the model could then increase the accuracy and reliability of stability predictions for rock masses under excavation.
Pramipexole-induced disruption of behavioral processes fundamental to intertemporal choice.
Johnson, Patrick S; Stein, Jeffrey S; Smits, Rochelle R; Madden, Gregory J
2013-05-01
Evaluating the effects of presession drug administration on intertemporal choice in nonhumans is a useful approach for identifying compounds that promote impulsive behavior in clinical populations, such as those prescribed the dopamine agonist pramipexole (PPX). Based on the results of previous studies, it is unclear whether PPX increases rats' impulsive choice or attenuates aspects of stimulus control. The present study was designed to experimentally isolate behavioral processes fundamental to intertemporal choice and challenge them pharmacologically with PPX administration. In Experiment 1, the hypothesis that PPX increases impulsive choice as a result of enhanced sensitivity to reinforcer delays was tested and disconfirmed. That is, acute PPX diminished delay sensitivity in a manner consistent with disruption of stimulus control whereas repeated PPX had no effect on delay sensitivity. Experiments 2 and 3 elaborated upon this finding by examining the effects of repeated PPX on rats' discrimination of response-reinforcer contingencies and reinforcer amounts, respectively. Accuracy of both discriminations was reduced by PPX. Collectively these results provide no support for past studies that have suggested PPX increases impulsive choice. Instead, PPX impairs stimulus control over choice behavior. The behavioral approach adopted herein could be profitably integrated with genetic and other biobehavioral models to advance our understanding of impulsive behavior associated with drug administration. © Society for the Experimental Analysis of Behavior.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, present new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
Zhang, Tao; Jiang, Feng; Yan, Lan; Xu, Xipeng
2017-12-26
The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness and high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the high-temperature, quasi-static compression test results and the high-temperature hardness test results was built by the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation. The simulated results show that the high-temperature hardness basically does not change with the change of load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness and thermal softening are consistent. The strain and stress of indentation were analyzed from the simulated contour. It was found that the strain increases with the increase of the test temperature, and the stress decreases with the increase of the test temperature.
Caselli, Federica; Bisegna, Paolo
2017-10-01
The performance of a novel microfluidic impedance cytometer (MIC) with coplanar configuration is investigated in silico. The main feature of the device is the ability to provide accurate particle-sizing despite the well-known measurement sensitivity to particle trajectory. The working principle of the device is presented and validated by means of an original virtual laboratory providing close-to-experimental synthetic data streams. It is shown that a metric correlating with particle trajectory can be extracted from the signal traces and used to compensate the trajectory-induced error in the estimated particle size, thus reaching high-accuracy. An analysis of relevant parameters of the experimental setup is also presented. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Xilun; Wang, Xiangchuan; Pan, Shilong
2017-03-01
An implementation of a distance measurement system using double-sideband suppressed-carrier (DSB-SC) modulation frequency scanning interferometry is proposed to reduce the variations in the optical path and improve the measurement accuracy. In the proposed system, electro-optic DSB-SC modulation is used to create dual-swept signals with opposite scanning directions. For each swept signal, the relative distance between the reference arm and the measuring arm is determined by the beat frequency of the signals from the two arms. By multiplying the two beat signals, measurement errors caused by variations in the optical path can be greatly reduced. As an experimental demonstration, a vibration was introduced in the optical path length. The experimental results show that the variations can be suppressed by over 19.9 dB.
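A hedged numerical sketch of the dual-swept idea: beat signals from oppositely directed sweeps carry vibration-induced Doppler shifts of opposite sign, so the sum-frequency term of their product depends only on the interferometer delay. The sweep rate, delay, Doppler shift and the free-space conversion to distance below are invented for illustration.

```python
import numpy as np

c = 3e8                      # free-space speed of light, m/s (fibre index ignored)
gamma = 1e12                 # optical sweep rate, Hz/s (assumed)
tau = 2e-7                   # delay between arms, s (assumed, ~30 m path difference)
f_doppler = 5e3              # vibration-induced Doppler shift, Hz (assumed)

fs = 10e6
t = np.arange(0, 2e-3, 1.0 / fs)
beat_up = np.cos(2 * np.pi * (gamma * tau + f_doppler) * t)   # up-sweep beat signal
beat_dn = np.cos(2 * np.pi * (gamma * tau - f_doppler) * t)   # down-sweep beat signal

product = beat_up * beat_dn            # contains a term at f_up + f_dn = 2*gamma*tau
spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
band = freqs > 50e3                    # look above the low-frequency difference term
f_sum = freqs[band][spectrum[band].argmax()]

distance = c * (f_sum / 2.0) / (2.0 * gamma)   # path difference, Doppler term cancelled
print(f"recovered beat {f_sum:.0f} Hz -> distance ~ {distance:.2f} m")
```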
Fournier, K B; Brown, C G; Yeoman, M F; Fisher, J H; Seiler, S W; Hinshelwood, D; Compton, S; Holdener, F R; Kemp, G E; Newlander, C D; Gilliam, R P; Froula, N; Lilly, M; Davis, J F; Lerch, Maj A; Blue, B E
2016-11-01
Our team has developed an experimental platform to evaluate the x-ray-generated stress and impulse in materials. Experimental activities include x-ray source development, design of the sample mounting hardware and sensors interfaced to the National Ignition Facility's diagnostics insertion system, and system integration into the facility. This paper focuses on the X-ray Transport and Radiation Response Assessment (XTRRA) test cassettes built for these experiments. The test cassette is designed to position six samples at three predetermined distances from the source, each known to within ±1% accuracy. Built-in calorimeters give in situ measurements of the x-ray environment along the sample lines of sight. The measured accuracy of sample responses as well as planned modifications to the XTRRA cassette is discussed.
Accuracy in streamflow measurements on the Fernow Experimental Forest
James W. Hornbeck
1965-01-01
Measurement of streamflow from small watersheds on the Fernow Experimental Forest at Parsons, West Virginia was begun in 1951. Stream-gaging stations are now being operated on 9 watersheds ranging from 29 to 96 acres in size; and 91 watershed-years of record have been collected. To determine how accurately streamflow is being measured at these stations, several of the...
Parametric studies and characterization measurements of x-ray lithography mask membranes
NASA Astrophysics Data System (ADS)
Wells, Gregory M.; Chen, Hector T. H.; Engelstad, Roxann L.; Palmer, Shane R.
1991-08-01
The techniques used in the experimental characterization of thin membranes that are being considered for potential use as mask blanks for x-ray lithography are presented. Among the parameters of interest for this evaluation are the film's stress, fracture strength, uniformity of thickness, absorption in the x-ray and visible spectral regions, and the modulus and grain structure of the material. The experimental techniques used for measuring these properties are described. The accuracy and applicability of the assumptions used to derive the formulas that relate the experimental measurements to the parameters of interest are considered. Experimental results for silicon carbide and diamond films are provided. Another characteristic needed for an x-ray mask carrier is radiation stability. The number of x-ray exposures expected to be performed in the lifetime of an x-ray mask on a production line is on the order of 10^7. The dimensional stability requirements placed on the membranes during this period are discussed. Interferometric techniques that provide sufficient sensitivity for these stability measurements are described. A comparison is made between the different techniques that have been developed in terms of the information that each technique provides, the accuracy of the various techniques, and the implementation issues that are involved with each technique.
Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne-Marie; Bouvier, Michel; Lichtarge, Olivier
2017-05-01
Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed to be the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiments; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, many variant predictions are sensitive to protein alignment construction and can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype-phenotype relationship. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
Empirical evaluation of data normalization methods for molecular classification.
Huang, Huei-Chung; Qin, Li-Xuan
2018-01-01
Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
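A small simulation in the spirit of the comparison described above (though far simpler than the re-sampling design used in the study): a classifier is trained on arrays carrying a handling effect confounded with class, with and without quantile normalization, and scored on independently handled test data. The simulated data, effect sizes and choice of classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def quantile_normalize(X):
    """Force every sample (row) onto the same reference distribution."""
    ranks = X.argsort(axis=1).argsort(axis=1)
    reference = np.sort(X, axis=1).mean(axis=0)
    return reference[ranks]

def simulate(n, handling):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 200))
    X[:, :10] += 0.8 * y[:, None]        # weak molecular signal in 10 markers
    X += handling * y[:, None]           # array-wide handling effect, confounded with class
    return X, y

X_tr, y_tr = simulate(100, handling=1.5)   # training batch with confounded handling effect
X_te, y_te = simulate(100, handling=0.0)   # independently handled test set

for name, f in [("raw", lambda Z: Z), ("quantile normalized", quantile_normalize)]:
    clf = LogisticRegression(max_iter=2000).fit(f(X_tr), y_tr)
    print(name, round(clf.score(f(X_te), y_te), 3))
```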
Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika
2013-03-01
Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
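For concreteness, the sketch below computes the kind of calibration quantities discussed above from toy confidence-accuracy pairs: a per-bin calibration curve and a simple over/underconfidence index (mean confidence minus proportion correct). The binning scheme and toy data are assumptions, not the study's analysis.

```python
import numpy as np

def calibration_stats(confidence, correct, bins=(0, 20, 40, 60, 80, 100)):
    """Per-bin (mean confidence, proportion correct) plus an over/underconfidence index."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    curve = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        upper = confidence <= hi if hi == bins[-1] else confidence < hi
        in_bin = (confidence >= lo) & upper
        if in_bin.any():
            curve.append((confidence[in_bin].mean(), correct[in_bin].mean()))
    over_under = confidence.mean() / 100.0 - correct.mean()   # > 0 indicates overconfidence
    return curve, over_under

rng = np.random.default_rng(4)
conf = rng.integers(0, 101, 300).astype(float)        # confidence ratings, 0-100
correct = rng.random(300) < (0.3 + 0.005 * conf)      # accuracy rises with confidence
curve, ou = calibration_stats(conf, correct)
print(f"over/underconfidence index: {ou:+.3f}")
```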
Ley-Bosch, Carlos; Quintana-Suárez, Miguel A.
2018-01-01
Indoor localization estimation has become an attractive research topic due to growing interest in location-aware services. Many research works have proposed solving this problem by using wireless communication systems based on radiofrequency. Nevertheless, those approaches usually deliver an accuracy of up to two metres, since they are hindered by multipath propagation. On the other hand, in the last few years, the increasing use of light-emitting diodes in illumination systems has led to the emergence of Visible Light Communication technologies, in which data communication is performed by transmitting through the visible band of the electromagnetic spectrum. This brings a brand new approach to high accuracy indoor positioning, because this kind of network is not affected by electromagnetic interference and the received optical power is more stable than radio signals. Our research focuses on proposing a fingerprinting indoor positioning estimation system based on neural networks to predict the device position in a 3D environment. Neural networks are an effective classification and predictive method. The localization system is built using a dataset of received signal strength values coming from a grid of different points. From these values, the position in Cartesian coordinates (x,y,z) is estimated. The use of three neural networks is proposed in this work, where each network is responsible for estimating the position along one axis. Experimental results indicate that the proposed system leads to substantial improvements in accuracy over the widely used traditional fingerprinting methods, yielding an accuracy above 99% and an average error distance of 0.4 mm. PMID:29601525
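A minimal sketch of the three-network arrangement described above: one small regressor per Cartesian axis, each mapping a fingerprint of received signal strengths to a coordinate. The synthetic LED layout, the inverse-square toy propagation model and the network sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_leds, n_points = 9, 2000
positions = np.column_stack([rng.uniform(0, 5, n_points),
                             rng.uniform(0, 5, n_points),
                             rng.uniform(0, 2.5, n_points)])       # x, y, z in metres
led_pos = np.column_stack([rng.uniform(0, 5, (n_leds, 2)),
                           np.full((n_leds, 1), 3.0)])             # LEDs on the ceiling

# Toy fingerprint: received power falls off with distance to each LED, plus noise
dist = np.linalg.norm(positions[:, None, :] - led_pos[None], axis=2)
rss = 1.0 / (1.0 + dist**2) + 0.01 * rng.normal(size=(n_points, n_leds))

train, test = slice(0, 1600), slice(1600, None)
nets = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        .fit(rss[train], positions[train, axis]) for axis in range(3)]   # one net per axis

pred = np.column_stack([net.predict(rss[test]) for net in nets])
err = np.linalg.norm(pred - positions[test], axis=1)
print(f"mean 3D error on held-out points: {err.mean():.3f} m")
```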
Classifying four-category visual objects using multiple ERP components in single-trial ERP.
Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin
2016-08-01
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses a multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potentially complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72%, which is substantially better than that achieved with any single ERP component feature (55.07% for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90% higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
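A hedged sketch of kernel-level fusion: one RBF kernel per ERP component, combined as a weighted sum and passed to an SVM with a precomputed kernel. The toy features, the fixed weights (rather than learned multiple-kernel weights) and the kernel width are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_trials = 400
y = rng.integers(0, 4, n_trials)                          # four visual object categories
components = {name: rng.normal(size=(n_trials, 30)) + 0.4 * y[:, None]
              for name in ["P1", "N1", "P2a", "P2b"]}     # toy per-component features
weights = {"P1": 0.2, "N1": 0.4, "P2a": 0.2, "P2b": 0.2}  # assumed fixed kernel weights

# Weighted sum of one RBF kernel per ERP component
K = sum(w * rbf_kernel(components[name], gamma=0.05) for name, w in weights.items())

train, test = slice(0, 300), slice(300, None)
clf = SVC(kernel="precomputed").fit(K[train, train], y[train])
print("fused-kernel accuracy:", clf.score(K[test, train], y[test]))
```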
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated: frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; and (2) the errors made in selecting an incorrect isotherm model and fitting the experimental data to it. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
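The model-fitting step mentioned above (fitting measured data points to an isotherm model) can be illustrated with a Langmuir fit by non-linear least squares; the synthetic concentrations, parameter values and noise level are placeholders, not data generated by the ED model used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    """Langmuir isotherm: q = qs * b * c / (1 + b * c)."""
    return qs * b * c / (1.0 + b * c)

rng = np.random.default_rng(7)
c = np.linspace(0.1, 50.0, 25)                           # mobile-phase concentrations, g/L
q_true = langmuir(c, qs=120.0, b=0.08)                   # "true" adsorbed amounts (assumed)
q_meas = q_true * (1 + 0.02 * rng.normal(size=c.size))   # 2% measurement error

(qs_fit, b_fit), cov = curve_fit(langmuir, c, q_meas, p0=(100.0, 0.05))
print(f"fitted qs = {qs_fit:.1f}, b = {b_fit:.3f}")
```

Comparing the fitted parameters with the ones used to generate the data is exactly the kind of error assessment the abstract describes, only with synthetic rather than chromatographically derived points.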
Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks
Khan, Komal Saifullah; Tariq, Muhammad
2014-01-01
Many wind energy projects report poor performance, as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through the WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in practical implementation and study its accuracy by comparing the simulation results with experimental results to analyze the variation in the results obtained from both the simulation model and the implemented model. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and a low false alarm rate when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed and the performance level of implemented projects is increased to above 86% of that of the simulated models. PMID:25421739
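A hedged sketch of a sequential fault check of the kind described above: repeatedly flag the anemometer whose readings deviate most from the consensus of the remaining sensors until no deviation exceeds a tolerance. The consensus rule (median), the RMSE threshold and the synthetic faults are assumptions, not the paper's algorithm.

```python
import numpy as np

def detect_faulty(readings, threshold=1.5):
    """readings: (n_samples, n_anemometers) wind-speed matrix. Returns flagged indices."""
    active = list(range(readings.shape[1]))
    faulty = []
    while len(active) > 2:
        consensus = np.median(readings[:, active], axis=1, keepdims=True)
        rmse = np.sqrt(np.mean((readings[:, active] - consensus) ** 2, axis=0))
        worst = int(np.argmax(rmse))
        if rmse[worst] < threshold:
            break                                 # remaining sensors agree well enough
        faulty.append(active.pop(worst))          # remove the worst sensor and re-check
    return faulty

rng = np.random.default_rng(8)
true_wind = 6 + 2 * np.sin(np.linspace(0, 6, 500))
data = true_wind[:, None] + 0.2 * rng.normal(size=(500, 8))
data[:, 3] += 3.0                                 # offset (miscalibrated) sensor
data[:, 6] *= 0.4                                 # badly scaled sensor
print("flagged anemometers:", detect_faulty(data))
```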
Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Prpic, Tomislav; Markovic, Goran
2016-02-01
The aim of the present study was to examine the acute effects of graded physiological strain on soccer kicking performance. Twenty-eight semi-professional soccer players completed both the experimental and control procedures. The experimental protocol incorporated repeated shooting trials combined with a progressive discontinuous maximal shuttle-run intervention. The initial running velocity was 8 km/h, increasing by 1 km/h every 3 min until exhaustion. The control protocol comprised only eight subsequent shooting trials. The soccer-specific kicking accuracy (KA; average distance from the ball-entry point to the goal center), kicking velocity (KV), and kicking quality (KQ; kicking accuracy divided by the time elapsed from hitting the ball to the point of entry) were evaluated via a reproducible and valid test over five individually determined exercise intensity zones. Compared with baseline or exercise at intensities below the second lactate threshold (LT2), physiological exertion above the LT2 (blood lactate > 4 mmol/L) resulted in a meaningful decrease in KA (11-13%; p < 0.05), KV (3-4%; p < 0.05), and overall KQ (13-15%; p < 0.01). Light and moderate-intensity exercise below the LT2 had no significant effect on soccer kicking performance. The results suggest that high-intensity physiological exertion above the player's LT2 impairs soccer kicking performance. In contrast, light to moderate physiological stress appears to be neither harmful nor beneficial for kicking performance.
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
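A minimal sketch of the two-feature classifier described above: a small neural network fed with the intergenic distance and STRING association score of each adjacent gene pair. The synthetic training pairs and their distributions are placeholders for the experimentally characterised operon sets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(9)
n_pairs = 2000
same_operon = rng.integers(0, 2, n_pairs)

# Same-operon pairs tend to have short (or negative, i.e. overlapping) intergenic
# distances and high STRING scores; the distributions below are invented.
distance = np.where(same_operon == 1,
                    rng.normal(30, 40, n_pairs),
                    rng.normal(150, 80, n_pairs))
string_score = np.clip(np.where(same_operon == 1,
                                rng.normal(0.8, 0.15, n_pairs),
                                rng.normal(0.3, 0.2, n_pairs)), 0, 1)
X = np.column_stack([distance, string_score])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                    random_state=0).fit(X[:1500], same_operon[:1500])
print("held-out accuracy:", clf.score(X[1500:], same_operon[1500:]))
```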
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to rely on the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of the ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted one day ahead can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. The impact of these ERP errors on ultra-rapid GNSS orbit prediction is then studied: the ERP errors enter the prediction through orbit integration and frame transformation, and these steps dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits free of ERP-related errors are predicted on the basis of the observed (ITRS) part of the ultra-rapid orbit, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS using the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improves the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The ERP-related error in the predicted orbit is reduced by at least 50% when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can improve ultra-rapid orbit prediction.
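To see why the quoted one-day ERP prediction errors matter for the orbit, here is a back-of-envelope check (not from the paper) that maps them to a position error at GNSS altitude; the nominal orbit radius of 26,600 km is an assumption, not a value given in the abstract.

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)
R_ORBIT_M = 26.6e6           # assumed nominal GNSS orbit radius in metres (not from the paper)

# Quoted one-day ERP prediction errors
pm_err_rad  = 0.15e-3 * ARCSEC_TO_RAD                # 0.15 mas of polar motion as an angle
ut1_err_rad = 0.053e-3 * (2 * math.pi / 86164.0)     # 0.053 ms of UT1 as an Earth-rotation angle

print(f"polar-motion error -> ~{pm_err_rad * R_ORBIT_M:.3f} m at orbit altitude")   # ~0.02 m
print(f"UT1-UTC error      -> ~{ut1_err_rad * R_ORBIT_M:.3f} m at orbit altitude")  # ~0.10 m
```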
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (the proportion of the message correctly produced by the interpreter) and on the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small, explaining just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings.
Protein subcellular localization prediction using artificial intelligence technology.
Nair, Rajesh; Rost, Burkhard
2008-01-01
Proteins perform many important tasks in living organisms, such as catalysis of biochemical reactions, transport of nutrients, and recognition and transmission of signals. The plethora of aspects of the role of any particular protein is referred to as its "function." One aspect of protein function that has been the target of intensive research by computational biologists is its subcellular localization. Proteins must be localized in the same subcellular compartment to cooperate toward a common physiological function. Aberrant subcellular localization of proteins can result in several diseases, including kidney stones, cancer, and Alzheimer's disease. To date, sequence homology remains the most widely used method for inferring the function of a protein. However, the application of advanced artificial intelligence (AI)-based techniques in recent years has resulted in significant improvements in our ability to predict the subcellular localization of a protein. The prediction accuracy has risen steadily over the years, in large part due to the application of AI-based methods such as hidden Markov models (HMMs), neural networks (NNs), and support vector machines (SVMs), although the availability of larger experimental datasets has also played a role. Automatic methods that mine textual information from the biological literature and molecular biology databases have considerably sped up the process of annotation for proteins for which some information regarding function is available in the literature. State-of-the-art methods based on NNs and HMMs can predict the presence of N-terminal sorting signals extremely accurately. Ab initio methods that predict subcellular localization for any protein sequence using only the native amino acid sequence and features predicted from the native sequence have shown the most remarkable improvements. The prediction accuracy of these methods has increased by over 30% in the past decade. The accuracy of these methods is now on par with high-throughput methods for predicting localization, and they are beginning to play an important role in directing experimental research. In this chapter, we review some of the most important methods for the prediction of subcellular localization.
A resource for benchmarking the usefulness of protein structure models.
Carbajo, Daniel; Tramontano, Anna
2012-08-02
Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models allows developers and users to assess the specific threshold of accuracy required to perform the task effectively. The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.
Maguire, DR; Henson, C
2016-01-01
Background and Purpose Repeated administration of a μ opioid receptor agonist can enhance some forms of impulsivity, such as delay discounting. However, it is unclear whether repeated administration alters motor impulsivity. Experimental Approach We examined the effects of acute administration of morphine and amphetamine prior to and during daily morphine administration in rats responding under a five‐choice serial reaction time task. Rats (n = 5) were trained to detect a brief flash of light presented randomly in one of five response holes; responding in the target hole delivered food, whereas responding in the wrong hole or responding prior to illumination of the target stimulus (premature response) initiated a timeout. Premature responding served as an index of motor impulsivity. Key Results Administered acutely, morphine (0.1–10 mg·kg−1, i.p.) increased omissions and modestly, although not significantly, increased premature responding, without affecting response accuracy; amphetamine (0.1–1.78 mg·kg−1, i.p.) increased premature responding without changing omissions or response accuracy. After 3 weeks of 10 mg·kg−1·day−1 morphine, tolerance developed to its effects on omissions, whereas premature responding increased approximately fourfold compared with baseline. Effects of amphetamine were not significantly affected by daily morphine administration. Conclusions and Implications These data suggest that repeated administration of morphine increased the effects of morphine on motor impulsivity, although tolerance developed to other effects, such as omissions. To the extent that impulsivity is a risk factor for drug abuse, repeated administration of μ opioid receptor agonists, for recreational or therapeutic purposes, might increase impulsivity and thus the risk for drug abuse. PMID:26776751
The Effect of Timbre and Vibrato on Vocal Pitch Matching Accuracy
NASA Astrophysics Data System (ADS)
Duvvuru, Sirisha
Research has shown that singers are better able to match pitch when the target stimulus has a timbre close to their own voice. This study seeks to answer the following questions: (1) Do classically trained female singers more accurately match pitch when the target stimulus is more similar to their own timbre? (2) Does the ability to match pitch vary with increasing pitch? (3) Does the ability to match pitch differ depending on whether the target stimulus is produced with or without vibrato? (4) Are mezzo sopranos less accurate than sopranos?
Beran, Gregory J O; Hartman, Joshua D; Heit, Yonaton N
2016-11-15
Molecular crystals occur widely in pharmaceuticals, foods, explosives, organic semiconductors, and many other applications. Thanks to substantial progress in electronic structure modeling of molecular crystals, attention is now shifting from basic crystal structure prediction and lattice energy modeling toward the accurate prediction of experimentally observable properties at finite temperatures and pressures. This Account discusses how fragment-based electronic structure methods can be used to model a variety of experimentally relevant molecular crystal properties. First, it describes the coupling of fragment electronic structure models with quasi-harmonic techniques for modeling the thermal expansion of molecular crystals, and what effects this expansion has on thermochemical and mechanical properties. Excellent agreement with experiment is demonstrated for the molar volume, sublimation enthalpy, entropy, and free energy, and the bulk modulus of phase I carbon dioxide when large basis second-order Møller-Plesset perturbation theory (MP2) or coupled cluster theories (CCSD(T)) are used. In addition, physical insight is offered into how neglect of thermal expansion affects these properties. Zero-point vibrational motion leads to an appreciable expansion in the molar volume; in carbon dioxide, it accounts for around 30% of the overall volume expansion between the electronic structure energy minimum and the molar volume at the sublimation point. In addition, because thermal expansion typically weakens the intermolecular interactions, neglecting thermal expansion artificially stabilizes the solid and causes the sublimation enthalpy to be too large at higher temperatures. Thermal expansion also frequently weakens the lower-frequency lattice phonon modes; neglecting thermal expansion causes the entropy of sublimation to be overestimated. Interestingly, the sublimation free energy is less significantly affected by neglecting thermal expansion because the systematic errors in the enthalpy and entropy cancel somewhat. Second, because solid state nuclear magnetic resonance (NMR) plays an increasingly important role in molecular crystal studies, this Account discusses how fragment methods can be used to achieve higher-accuracy chemical shifts in molecular crystals. Whereas widely used plane wave density functional theory models are largely restricted to generalized gradient approximation (GGA) functionals like PBE in practice, fragment methods allow the routine use of hybrid density functionals with only modest increases in computational cost. In extensive molecular crystal benchmarks, hybrid functionals like PBE0 predict chemical shifts with 20-30% higher accuracy than GGAs, particularly for 1H, 13C, and 15N nuclei. Due to their higher sensitivity to polarization effects, 17O chemical shifts prove slightly harder to predict with fragment methods. Nevertheless, the fragment model results are still competitive with those from GIPAW. The improved accuracy achievable with fragment approaches and hybrid density functionals increases discrimination between different potential assignments of individual shifts or crystal structures, which is critical in NMR crystallography applications. This higher accuracy and greater discrimination are highlighted in application to the solid state NMR of different acetaminophen and testosterone crystal forms.
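The quasi-harmonic workflow described above amounts to minimizing F(V, T) = E_el(V) + F_vib(V, T) over volume at each temperature, so that the equilibrium volume grows with T. The sketch below illustrates only that idea with a deliberately toy one-mode Einstein model and invented constants; it is not CO2 data and does not reproduce the fragment-based electronic structure machinery of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy quasi-harmonic model (all constants invented): the equilibrium volume at
# temperature T minimises F(V, T) = E_el(V) + F_vib(V, T).
kB = 1.380649e-23        # J/K
h  = 6.62607015e-34      # J*s

V0      = 1.0            # reference volume (relative units)
k_eff   = 5.0e-19        # curvature of the toy electronic energy, J per (V - V0)**2
w0      = 2.0e12         # Einstein frequency at V0, Hz
gamma   = 2.0            # Grueneisen parameter: w(V) = w0 * (V0 / V)**gamma
n_modes = 3.0

def free_energy(V, T):
    E_el = 0.5 * k_eff * (V - V0) ** 2                       # toy electronic energy
    w = w0 * (V0 / V) ** gamma                               # volume-dependent mode frequency
    x = h * w / (kB * T)
    F_vib = n_modes * (0.5 * h * w + kB * T * np.log1p(-np.exp(-x)))
    return E_el + F_vib

for T in (50.0, 150.0, 250.0):
    Veq = minimize_scalar(lambda V: free_energy(V, T), bounds=(0.8, 1.5), method="bounded").x
    print(f"T = {T:5.1f} K   equilibrium V/V0 = {Veq:.4f}")  # expands with temperature
```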
Photo-Assisted Epitaxial Growth for III-V Semiconductors
1993-02-01
(Only fragmentary abstract text is available for this report.) The recoverable content describes an infrared laser interferometric technique (Section 2.1) used to calibrate the substrate temperature with an accuracy of ±3 °C, and the MOMBE growth of GaAs, InAs, and InGaAs, which was first studied by monitoring intensity oscillations; because predicting the dependence of the measured interferometric quantity on the temperature change was not feasible, that dependence was calibrated experimentally.
2013-08-16
(Only fragmentary abstract text is available for this entry.) The recoverable content concerns a blind assessment of an epitope-prediction approach in the context of a novel, immunologically relevant antigen and notes the limited accuracy of the tested algorithms in predicting the in vivo immune responses; overlapping peptides spanning the entire sequence are individually tested for antibody-interacting residues, whereas conformational B cell epitopes, in contrast, ... (text truncated).
NASA Astrophysics Data System (ADS)
Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.
2017-07-01
This paper presents the development of a unique method for the experimental determination of wheel-rail contact forces and the contact point position using an instrumented wheelset (IWS). Solutions to key problems in the development of the IWS are proposed, such as the determination of the optimal locations, layout, number, and connection scheme of the strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for solving these problems is a wheel model and the results of FEM calculations, while the IIA is based on blind source separation using independent component analysis. In the first phase, the developed method was tested on the wheel model and high accuracy was obtained (deviations between the parameters obtained with the IIA and the parameters actually applied in the model are less than 2%). In the second phase, experimental tests on the real object, the IWS, were carried out. The signal-to-noise ratio was identified as the main parameter influencing measurement accuracy. The obtained results show that the developed method enables measurement of the vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and at higher values of the Y/Q ratio or the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. These results show that the proposed method can be successfully applied to the high-accuracy measurement of wheel-rail contact forces and contact point position using an IWS.
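The inverse identification step relies on blind source separation via independent component analysis. The sketch below only illustrates that generic separation step with synthetic force histories and an invented mixing matrix; it is not the paper's FEM-calibrated IIA, and the recovered components would still need scaling and assignment against a calibrated wheel model.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Hypothetical "true" vertical (Q) and lateral (Y) force histories (arbitrary units)
Q = 100 + 10 * np.sin(2 * np.pi * 5 * t)
Y = 30 + 8 * np.sign(np.sin(2 * np.pi * 3 * t))

# Assumed mixing of the two forces into two strain-gauge bridge outputs (made-up matrix)
A = np.array([[0.9, 0.4],
              [0.3, 1.1]])
bridges = np.column_stack([Q, Y]) @ A.T + 0.1 * rng.standard_normal((t.size, 2))

# Blind source separation recovers the force waveforms up to scale, sign, and order
sources = FastICA(n_components=2, random_state=0).fit_transform(bridges)
print(sources.shape)   # (2000, 2): two recovered waveforms
```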
Hu, Jiawen; Duan, Zhenhao; Zhu, Chen; Chou, I.-Ming
2007-01-01
Evaluation of CO2 sequestration in formation brine or in seawater needs highly accurate experimental data or models of the pressure–volume–temperature–composition (PVTx) properties of the CO2–H2O and CO2–H2O–NaCl systems. This paper presents a comprehensive review of the experimental PVTx properties and the thermodynamic models of these two systems. The following conclusions are drawn from the review: (1) About two-thirds of the experimental data are consistent with each other, with uncertainties within 0.5% for liquid volumes and within 2% for gas volumes. However, this accuracy is not sufficient for assessing CO2 sequestration. Among the data sets for liquids, only a few are suitable for accurate modeling of CO2 sequestration; these have an error of about 0.1% on average and roughly cover 273 to 642 K and 1 to 35 MPa. (2) There is a shortage of volumetric data for the saturated vapor phase. (3) There are only a few data sets for the ternary liquids, and they are inconsistent with each other; only a couple of them can be used to test a predictive density model for CO2 sequestration. (4) Although there are a few models with accuracy close to that of the experiments, none of them is accurate enough for CO2 sequestration modeling, which normally requires a density accuracy better than 0.1%. Some calculations are available at www.geochem-model.org.
Piezoresistive position microsensors with ppm-accuracy
NASA Astrophysics Data System (ADS)
Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav
2015-05-01
In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, is analyzed. Based on this, a conceptual layout of MEMS devices (microsensors) for multidimensional position monitoring, comprising a single anchored part and a single actuated part, has been developed. The two parts are connected by multiple micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors with detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimentally measured results from the characterization of 1D, 2D and 3D position microsensors are reported as well. Exploiting different flexure layouts, a travel range between 50 μm and 1.8 mm and sensor sensitivities between 30 μV/μm and 5 mV/μm at 1 V DC supply voltage have been demonstrated. A method for the accurate calculation of all three Cartesian coordinates, based on the measurement of at least three microsensor signals, is also described. The analysis of the experimental results proves the capability of position monitoring with ppm (parts per million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions on further improving their usability for high-accuracy position sensing. The present study is also part of a broader strategy for developing a novel MEMS-based platform for the simultaneous, accurate measurement of various physical quantities when they are transduced into a change of position.
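The abstract does not detail how the three Cartesian coordinates are computed from the (at least three) sensor signals. The sketch below assumes a simple linear, pre-calibrated sensor model and shows one standard way to invert it by least squares; the calibration matrix, offsets, and units are all invented for illustration and are not the authors' calibration.

```python
import numpy as np

# Assumed linear sensor model (not from the paper): each microsensor output is
# s_i = a_i . [x, y, z] + b_i, with per-sensor sensitivity rows a_i (V per um)
# and offsets b_i (V) obtained from a prior calibration.
A = np.array([[5.0e-3, 0.1e-3, 0.0],
              [0.1e-3, 5.0e-3, 0.0],
              [0.2e-3, 0.2e-3, 4.0e-3],
              [4.8e-3, 0.0,    0.3e-3]])   # a 4th sensor makes the system overdetermined
b = np.array([0.01, 0.02, 0.00, 0.01])

true_pos = np.array([12.0, -7.5, 3.2])     # um, made-up displacement
signals = A @ true_pos + b + 1e-5 * np.random.default_rng(1).standard_normal(4)

# Least-squares inversion of the calibrated model recovers the coordinates
est, *_ = np.linalg.lstsq(A, signals - b, rcond=None)
print(np.round(est, 3))                    # approximately [12, -7.5, 3.2] um
```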