Sample records for accuracy liga theory

  1. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately
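
    The perturbability analysis described above can be illustrated with a small simulation. The sketch below is a hedged, simplified illustration (not the authors' LIGA code): it perturbs point locations with circular positional error of a chosen mean magnitude and measures how often a k-nearest-neighbour spatial weight matrix changes. The function names, the choice of k-nearest-neighbour weights, and all parameter values are assumptions for demonstration only.

    ```python
    import numpy as np

    def knn_weights(points, k=4):
        """Binary k-nearest-neighbour adjacency matrix (one assumed weight specification)."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        w = np.zeros_like(d, dtype=bool)
        idx = np.argsort(d, axis=1)[:, :k]
        rows = np.repeat(np.arange(len(points)), k)
        w[rows, idx.ravel()] = True
        return w

    def perturbability(points, mean_error, k=4, trials=200, seed=None):
        """Per-location fraction of weight entries that flip under circular positional error."""
        rng = np.random.default_rng(seed)
        base = knn_weights(points, k)
        # isotropic Gaussian whose mean radial error equals mean_error (in 2-D, E|r| = sigma*sqrt(pi/2))
        sigma = mean_error / np.sqrt(np.pi / 2.0)
        flips = np.zeros(len(points))
        for _ in range(trials):
            noisy = points + rng.normal(0.0, sigma, size=points.shape)
            flips += (knn_weights(noisy, k) != base).mean(axis=1)
        return flips / trials

    if __name__ == "__main__":
        pts = np.random.default_rng(0).uniform(0, 100, size=(50, 2))
        for err in (1.0, 5.0, 10.0):
            print(f"mean positional error {err:5.1f}: mean perturbability {perturbability(pts, err).mean():.3f}")
    ```

    Plotting each location's perturbability against its leverage on the statistic of interest would give a rough analogue of the LIGA scatterplot described in the abstract.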

  2. Predictions of ground states of LiGa and NaGa

    NASA Astrophysics Data System (ADS)

    Boldyrev, Alexander I.; Simons, Jack

    1996-11-01

    The ground and very low-lying excited states of LiGa and NaGa have been studied using high-level ab initio techniques. At the QCISD(T)/6-311+G(2df) level of theory, the 1Σ+ state was found to be the most stable for both molecules. The equilibrium bond lengths and dissociation energies were found to be R(LiGa) = 2.865 Å and D0(LiGa) = 22.3 kcal/mol, and R(NaGa) = 3.174 Å and D0(NaGa) = 17.1 kcal/mol. Trends within the ground electronic states of LiB, NaB, LiAl, NaAl, LiGa and NaGa are discussed, and predictions are made for related AlkM species (Alk = Li-Cs and M = B-Tl).

  3. The Covidien LigaSure Maryland Jaw Device.

    PubMed

    Zaidi, Nisar; Glover, Anthony R; Sidhu, Stanley B

    2015-03-01

    Since its invention nearly 20 years ago, the Covidien LigaSure device along with its ForceTriad generator has dominated the Electrothermal Bipolar Vessel Sealing market. The LigaSure was used for surgical procedures, both open and laparoscopic. The purpose of this review is to provide evidence of the safety and utility of the LigaSure device compared to more traditional means of hemostasis and its ultrasonic competitor, particularly in laparoscopic applications. We will provide evidence related to electrothermal bipolar vessel sealing in general and look specifically at Covidien's newest product, the LigaSure Maryland Jaw Device.

  4. Microfabrication: LIGA-X and applications

    NASA Astrophysics Data System (ADS)

    Kupka, R. K.; Bouamrane, F.; Cremers, C.; Megtert, S.

    2000-09-01

    X-ray LIGA (Lithography, Electrogrowth, Moulding) is one of today's key technologies in microfabrication and upcoming modern (meso)-(nano) fabrication, already used and anticipated for micromechanics (micromotors, microsensors, spinnerets, etc.), micro-optics, micro-hydrodynamics (fluidic devices), microbiology, in medicine, in biology, and in chemistry for microchemical reactors. It compares to micro-electromechanical systems (MEMS) technology, offering a larger, non-silicon choice of materials and better inherent precision. X-ray LIGA relies on synchrotron radiation to obtain the necessary X-ray fluxes and uses X-ray proximity printing. Inherent advantages are its extreme precision, depth of field and very low intrinsic surface roughness. However, the quality of fabricated structures often depends on secondary effects during exposure and effects like resist adhesion. UV-LIGA, relying on thick UV resists, is an alternative for projects requiring less precision. Modulating the spectral properties of synchrotron radiation, different regimes of X-ray lithography lead to (a) the mass-fabrication of classical nanostructures, (b) the fabrication of high aspect ratio nanostructures (HARNST), (c) the fabrication of high aspect ratio microstructures (HARMST), and (d) the fabrication of high aspect ratio centimeter structures (HARCST). Reviewing very recent activities around X-ray LIGA, we show the versatility of the method, which finds its region of application where it performs best and competing microtechnologies are less advantageous. An example of surface-based X-ray and particle lenses (orthogonal reflection optics (ORO)) made by X-ray LIGA is given.

  5. Total x-ray power measurements in the Sandia LIGA program.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Michael E.; Ting, Aili

    2005-08-01

    Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on the LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at the APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes and modified beam calorimeters as total X-ray power
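
    The block calorimetry described above rests on a simple energy balance: before losses matter, the absorbed beam power is approximately the block's heat capacity times its initial rate of temperature rise, P ≈ m·c_p·dT/dt. The snippet below is a hedged illustration of that relation only (it is not the LEX-D code or Sandia's analysis), and every numerical value in it is an invented placeholder.

    ```python
    import numpy as np

    def absorbed_power(times_s, temps_K, mass_kg, cp_J_per_kgK, fit_points=10):
        """Estimate absorbed beam power from the initial slope of the temperature trace."""
        t = np.asarray(times_s[:fit_points], dtype=float)
        T = np.asarray(temps_K[:fit_points], dtype=float)
        slope = np.polyfit(t, T, 1)[0]          # dT/dt in K/s
        return mass_kg * cp_J_per_kgK * slope   # watts, P ~ m * c_p * dT/dt

    if __name__ == "__main__":
        t = np.arange(0.0, 60.0, 5.0)           # s, synthetic sampling times
        T = 295.0 + 0.02 * t                    # synthetic 0.02 K/s temperature rise
        # aluminum c_p ~ 900 J/(kg K); a 0.1 kg block is assumed here
        print(f"absorbed power ~ {absorbed_power(t, T, 0.10, 900.0):.2f} W")
    ```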

  6. Optical properties of LiGaS2: an ab initio study and spectroscopic ellipsometry measurement

    NASA Astrophysics Data System (ADS)

    Atuchin, V. V.; Lin, Z. S.; Isaenko, L. I.; Kesler, V. G.; Kruchinin, V. N.; Lobanov, S. I.

    2009-11-01

    Electronic and optical properties of lithium thiogallate crystal, LiGaS2, have been investigated by both experimental and theoretical methods. The plane-wave pseudopotential method based on DFT theory has been used for band structure calculations. The electronic parameters of Ga 3d orbitals have been corrected by the DFT+U methods to be consistent with those measured with x-ray photoemission spectroscopy. Evolution of optical constants of LiGaS2 over a wide spectral range was determined by developed first-principles theory and dispersion curves were compared with optical parameters defined by spectroscopic ellipsometry in the photon energy range 1.2-5.0 eV. Good agreement has been achieved between theoretical and experimental results.

  7. Safety and efficacy of LigaSure usage in pancreaticoduodenectomy

    PubMed Central

    Eng, Oliver S; Goswami, Julie; Moore, Dirk; Chen, Chunxia; Brumbaugh, Jennifer; Gannon, Christopher J; August, David A; Carpizo, Darren R

    2013-01-01

    Background Over recent years, use of the LigaSure™ vessel sealing device has increased in major abdominal surgery to include pancreaticoduodenectomy (PD). LigaSure™ use during PD has expanded to include all steps of the procedure, including the division of the uncinate margin. This introduces the potential for thermal major vascular injury or margin positivity. The aim of the present study was to evaluate the safety and efficacy of LigaSure™ usage in PD in comparison to established dissection techniques. Methods One hundred and forty-eight patients who underwent PD from 2007 to 2012 at Robert Wood Johnson University Hospital were identified from a retrospective database. Two groups were recognized: those in which the LigaSure™ device was used (N = 114), and in those it was not (N = 34). Peri-operative outcomes were compared. Results Vascular intra-operative complications directly caused by thermal injury from LigaSure™ use occurred in 1.8% of patients. Overall vascular intra-operative complications, uncinate margin positivity, blood loss, length of stay, and complication severity were not significantly different between groups. The mean operative time was 77 min less (P < 0.010) in the LigaSure™ group. Savings per case where the LigaSure™ was used amounted to $1776.73. Conclusion LigaSure™ usage during PD is safe and effective. It is associated with decreased operative times, which may decrease operative costs in PD. PMID:23782268

  8. Safety and efficacy of LigaSure usage in pancreaticoduodenectomy.

    PubMed

    Eng, Oliver S; Goswami, Julie; Moore, Dirk; Chen, Chunxia; Brumbaugh, Jennifer; Gannon, Christopher J; August, David A; Carpizo, Darren R

    2013-10-01

    Over recent years, use of the LigaSure™ vessel sealing device has increased in major abdominal surgery to include pancreaticoduodenectomy (PD). LigaSure™ use during PD has expanded to include all steps of the procedure, including the division of the uncinate margin. This introduces the potential for thermal major vascular injury or margin positivity. The aim of the present study was to evaluate the safety and efficacy of LigaSure™ usage in PD in comparison to established dissection techniques. One hundred and forty-eight patients who underwent PD from 2007 to 2012 at Robert Wood Johnson University Hospital were identified from a retrospective database. Two groups were recognized: those in which the LigaSure™ device was used (N = 114), and in those it was not (N = 34). Peri-operative outcomes were compared. Vascular intra-operative complications directly caused by thermal injury from LigaSure™ use occurred in 1.8% of patients. Overall vascular intra-operative complications, uncinate margin positivity, blood loss, length of stay, and complication severity were not significantly different between groups. The mean operative time was 77 min less (P < 0.010) in the LigaSure™ group. Savings per case where the LigaSure™ was used amounted to $1776.73. LigaSure™ usage during PD is safe and effective. It is associated with decreased operative times, which may decrease operative costs in PD. © 2013 International Hepato-Pancreato-Biliary Association.

  9. The use of LigaSure in patients with hyperthyroidism.

    PubMed

    Barbaros, Umut; Erbil, Yeşim; Bozbora, Alp; Deveci, Uğur; Aksakal, Nihat; Dinççağ, Ahmet; Ozarmağan, Selçuk

    2006-11-01

    Thyroidectomies of hyperthyroid patients are known to be bloodier than the operations performed for euthyroid nodular diseases and require careful hemostasis. Our purpose was to evaluate the efficacy of the use of LigaSure in patients with hyperthyroidism. Between January 2004 and October 2005, 100 patients underwent total or near-total thyroidectomy. A bipolar vessel sealing system (LigaSure) was the chosen modality for hemostasis in half of these patients, and the conventional suture ligation technique was used for the rest. The following data were evaluated prospectively in this non-randomized study: patient demographics, thyroid pathology, operative duration, presence of complications, and duration of hospital stay. Comparisons of the data were made using the Wilcoxon and chi-square tests. Among the patients of the LigaSure group, 14 were found to have hyperthyroidism (seven with Graves' disease and seven with multinodular toxic goiter), while 36 were euthyroid. The operative time and the hospital stay of the patients in the LigaSure group were significantly shorter than those of the conventional thyroidectomy group (p<0.05). The complication rates of the LigaSure and conventional thyroidectomy groups were 4 and 6%, respectively (p>0.05). The use of LigaSure as an operative technique in the treatment of Graves' disease and toxic goiter is a safe and effective modality that provides a shorter hospital stay and a shorter operation time as well.

  10. Fracture toughness study on LIGA fabricated microstructures

    NASA Astrophysics Data System (ADS)

    Oropeza, Catherine; Lian, Kun; Wang, Wanjun

    2003-01-01

    One of the major difficulties faced by MEMS researchers today is the lack of data regarding properties of electroplated metals or alloys at the micro-scale, such as those produced by the LIGA and LIGA-related processes. These mechanical properties are not well known and cannot be extrapolated from macro-scale data without experimental verification. This lack of technical information about physical properties at the microscale has affected the consistency and reliability of batch-fabricated components and has led to very low rates of successful fabrication. Therefore, this materials issue is of vital importance to the development of LIGA technology and to its industrial applications. The research work reported in this paper focuses on the development of a new capability based on the design, fabrication, and testing of groups of UV-LIGA fabricated nickel microspecimens for the evaluation of fracture strength. The devised testing mechanism demonstrated compatibility with the fabricated samples and the capability of performing the desired experimentation by generating resistance-to-fracture values of the nickel specimens. The average fracture strength value obtained, expressed with a 95% confidence interval, was 315 +/- 54 MPa. Further data acquisition, especially involving tensile specimen testing, and material analysis are needed to fully understand the implications of the information obtained.
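
    As a minimal illustration of how a result in the reported form (mean fracture strength with a 95% confidence interval) is computed, the sketch below applies the standard t-based interval to a set of strength measurements. The sample values are invented placeholders, not the paper's data.

    ```python
    import numpy as np
    from scipy import stats

    def mean_with_ci(values, confidence=0.95):
        """Sample mean and t-based half-width of the confidence interval."""
        x = np.asarray(values, dtype=float)
        half = stats.t.ppf(0.5 + confidence / 2.0, df=len(x) - 1) * stats.sem(x)
        return x.mean(), half

    if __name__ == "__main__":
        strengths_MPa = [280.0, 350.0, 310.0, 295.0, 340.0, 305.0]   # hypothetical values
        mean, half = mean_with_ci(strengths_MPa)
        print(f"fracture strength = {mean:.0f} +/- {half:.0f} MPa (95% CI)")
    ```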

  11. Cloning and Molecular Characterization of an Immunogenic LigA Protein of Leptospira interrogans

    PubMed Central

    Palaniappan, Raghavan U. M.; Chang, Yung-Fu; Jusuf, S. S. D.; Artiushin, S.; Timoney, John F.; McDonough, Sean P.; Barr, Steve C.; Divers, Thomas J.; Simpson, Kenneth W.; McDonough, Patrick L.; Mohammed, Hussni O.

    2002-01-01

    A clone expressing a novel immunoreactive leptospiral immunoglobulin-like protein A of 130 kDa (LigA) from Leptospira interrogans serovar pomona type kennewicki was isolated by screening a genomic DNA library with serum from a mare that had recently aborted due to leptospiral infection. LigA is encoded by an open reading frame of 3,675 bp, and the deduced amino acid sequence consists of a series of 90-amino-acid tandem repeats. A search of the NCBI database found that homology of the LigA repeat region was limited to an immunoglobulin-like domain of the bacterial intimin binding protein of Escherichia coli, the cell adhesion domain of Clostridium acetobutylicum, and the invasin of Yersinia pestis. Secondary structure prediction analysis indicates that LigA consists mostly of beta sheets with a few alpha-helical regions. No LigA was detectable by immunoblot analysis of lysates of the leptospires grown in vitro at 30°C or when cultures were shifted to 37°C. Strikingly, immunohistochemistry on kidney from leptospira-infected hamsters demonstrated LigA expression. These findings suggest that LigA is specifically induced only in vivo. Sera from horses, which aborted as a result of natural Leptospira infection, strongly recognize LigA. LigA is the first leptospiral protein described to have 12 tandem repeats and is also the first to be expressed only during infection. Thus, LigA may have value in serodiagnosis or as a protective immunogen in novel vaccines. PMID:12379666

  12. Solution structure of leptospiral LigA4 Big domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Song; Zhang, Jiahai; Zhang, Xuecheng

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been proved to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca2+ binding property of the LigA4 Big domain. - Highlights: • Determination of the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca2+-binding site was identified by Stains-all assay and NMR chemical shift perturbation.

  13. Low-loss LIGA-micromachined conductor-backed coplanar waveguide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forman, Michael A.

    2004-12-01

    A mesoscale low-loss LIGA-micromachined conductor-backed coplanar waveguide is presented. The 517 µm lines are the tallest uniplanar LIGA-fabricated microwave transmission lines to date, as well as the first to be constructed of copper rather than nickel. The conductor-backed micromachined CPW on quartz achieves a measured attenuation of 0.064 dB/cm at 15.5 GHz.

  14. A safety-based comparison of pure LigaSure use and LigaSure-tie technique in total thyroidectomy.

    PubMed

    Pergel, A; Yucel, A Fikret; Aydin, I; Sahin, D A; Aras, S; Kulacoglu, H

    2014-01-01

    Sutureless total thyroidectomy by using vessel sealing devices has been shown to be safe in some recent clinical studies. However, some surgeons are still concerned about the use of these energy devices in the vicinity of the recurrent laryngeal nerve and parathyroid glands. The objective of this study was to investigate the effects of the use of pure LigaSure on postoperative complications and to discuss the pertinent literature. A total of 456 patients having undergone a total thyroidectomy operation between June 2009 and March 2011 were included in the study. Data were prospectively collected and retrospectively evaluated. Patients were separated into 2 groups. Group L comprised 182 patients in whom only LigaSure was used, and group LT consisted of 274 patients in whom ligation was used in the vicinity of the recurrent laryngeal nerve and parathyroid glands, and LigaSure was used in all other parts of the surgery. Patients' blood calcium values were checked preoperatively and at postoperative 24, 48, and 72 hours. Groups were assessed in terms of demographic properties, thyroid pathology, duration of operation, and postoperative complications. Groups were similar in respect of demographic properties, operation duration, and thyroid gland pathology. No mortality was recorded. The laboratory hypocalcemia rate was higher in group L (P = 0.003), but no significant difference was identified between groups in terms of symptomatic hypocalcemia. No permanent hypocalcemia or recurrent laryngeal nerve injury developed in any of the patients in the two groups. Pure LigaSure for total thyroidectomy may increase the laboratory hypocalcemia rate, but not symptomatic hypocalcemia. Hemorrhage-related complications were similar and low in the two groups. Ligations in places close to delicate anatomic structures did not cause longer operative times and may be a safer option in total thyroidectomy. Celsius.

  15. LigaSure Hemorrhoidectomy for Symptomatic Hemorrhoids: First Pediatric Experience.

    PubMed

    Grossmann, Ole; Soccorso, Giampiero; Murthi, Govind

    2015-08-01

    Hemorrhoids are uncommon in children. Third and fourth degree symptomatic hemorrhoids may be surgically excised. We describe the first experience of using LigaSure (Covidien, Mansfield, Massachusetts, United States) to perform hemorrhoidectomies in children. LigaSure hemorrhoidectomy has been well described in adults and is found to be superior in patient tolerance as compared with conventional hemorrhoidectomy. Georg Thieme Verlag KG Stuttgart · New York.

  16. Recent Developments in Microsystems Fabricated by the Liga-Technique

    NASA Technical Reports Server (NTRS)

    Schulz, J.; Bade, K.; El-Kholi, A.; Hein, H.; Mohr, J.

    1995-01-01

    As an example of microsystems fabricated by the LIGA-technique (x-ray lithography, electroplating and molding), three systems are described and characterized: a triaxial acceleration sensor system, a micro-optical switch, and a microsystem for the analysis of pollutants. The fabrication technologies are reviewed with respect to the key components of the three systems: an acceleration sensor, an electrostatic actuator, and a spectrometer made by the LIGA-technique. A micro-pump and a micro-valve, made by using micromachined tools for molding, and optical fiber imaging are made possible by combining LIGA and anisotropic etching of silicon in a batch process. These examples show that the combination of technologies and components is the key to complex microsystems. The design of such microsystems will be facilitated if standardized interfaces are available.

  17. Electronic structure and optical properties of LiGa0.5In0.5Se2 single crystal, a nonlinear optical mid-IR material

    NASA Astrophysics Data System (ADS)

    Lavrentyev, A. A.; Gabrelian, B. V.; Vu, Tuan V.; Isaenko, L. I.; Yelisseyev, A. P.; Khyzhun, O. Y.

    2018-06-01

    Measurements of X-ray photoelectron core-level and valence-band spectra for pristine and irradiated with Ar+ ions surfaces of LiGa0.5In0.5Se2 single crystal, novel nonlinear optical mid-IR selenide grown by a modified vertical Bridgman-Stockbarger technique, are reported. Electronic structure of LiGa0.5In0.5Se2 is elucidated from theoretical and experimental points of view. Notably, total and partial densities of states (DOSs) of the LiGa0.5In0.5Se2 compound are calculated based on density functional theory (DFT) using the augmented plane wave + local orbitals (APW + lo) method. In accordance with the DFT calculations, the principal contributors to the valence band are the Se 4p states, making the main input at the top and in the upper part of the band, while its bottom is dominated by contributions of the valence s states associated with Ga and In atoms. The theoretical total DOS curve peculiarities are found to be in excellent agreement with the shape of the X-ray photoelectron valence-band spectrum of the LiGa0.5In0.5Se2 single crystal. The bottom of the conduction band of LiGa0.5In0.5Se2 is formed mainly by contributions of the unoccupied Ga 4s and In 5s states in almost equal proportion, with somewhat smaller contributions of the unoccupied Se 4p states as well. Our calculations indicate that the LiGa0.5In0.5Se2 compound is a direct gap semiconductor. The principal optical constants of LiGa0.5In0.5Se2 are calculated in the present work.

  18. Immunoprotective properties of recombinant LigA and LigB in a hamster model of acute leptospirosis

    PubMed Central

    Lourdault, Kristel; Matsunaga, James; Haake, David A.

    2017-01-01

    Leptospirosis is the most widespread zoonosis and is considered a major public health problem worldwide. Currently, there is no widely available vaccine against leptospirosis for use in humans. A purified, recombinant subunit vaccine that includes the last six immunoglobulin-like (Ig-like) domains of the leptospiral protein LigA (LigA7’-13) protects against lethal infection but not renal colonization after challenge by Leptospira interrogans. In this study, we examined whether the addition of the first seven Ig-like domains of LigB (LigB0-7) to LigA7’-13, can enhance immune protection and confer sterilizing immunity in the Golden Syrian hamster model of acute leptospirosis. Hamsters were subcutaneously immunized with soluble, recombinant LigA7’-13, LigB0-7, or a combination of LigA7’-13 and LigB0-7 in Freund’s adjuvant. Immunization with Lig proteins generated a strong humoral immune response with high titers of IgG that recognized homologous protein, and cross-reacted with the heterologous protein as assessed by ELISA. LigA7’-13 alone, or in combination with LigB0-7, protected all hamsters from intraperitoneal challenge with a lethal dose of L. interrogans serovar Copenhageni strain Fiocruz L1-130. However, bacteria were recovered from the kidneys of all animals. Of eight animals immunized with LigB0-7, only three survived Leptospira challenge, one of which lacked renal colonization and had antibodies to native LigB by immunoblot. In addition, sera from two of the three LigB0-7 immunized survivors cross-reacted with LigA11-13, a region of LigA that is sufficient for protection. In summary, we confirmed that LigA7’-13 protects hamsters from death but not infection, and immunization with LigB0-7, either alone or in combination with LigA7’-13, did not confer sterilizing immunity. PMID:28704385

  19. Electronic structure of LiGaS 2

    NASA Astrophysics Data System (ADS)

    Atuchin, V. V.; Isaenko, L. I.; Kesler, V. G.; Lobanov, S.; Huang, H.; Lin, Z. S.

    2009-04-01

    X-ray photoelectron spectroscopy (XPS) measurement has been performed to determine the valence band structure of LiGaS 2 crystals. The experimental measurement is compared with the electronic structure obtained from the density functional calculations. It is found that the Ga 3d states in the XPS spectrum are much higher than the calculated results. In order to eliminate this discrepancy, the LDA+ U method is employed and reasonable agreement is achieved. Further calculations show that the difference of the linear and nonlinear optical coefficients between LDA and LDA+ U calculations is negligibly small, indicating that the Ga 3d states are actually independent of the excited properties of LiGaS 2 crystals since they are located at a very deep position in the valence bands.

  20. LIGA-based microsystem manufacturing: the electrochemistry of through-mold deposition and material properties.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, James J.; Goods, Steven Howard

    2005-06-01

    The report presented below is to appear in "Electrochemistry at the Nanoscale", Patrik Schmuki, Ed., Springer-Verlag (ca. 2005). The history of the LIGA process, used for fabricating dimensionally precise structures for microsystem applications, is briefly reviewed, as are the basic elements of the technology. The principal focus, however, is on the unique aspects of the electrochemistry of LIGA through-mask metal deposition and the generation of the fine and uniform microstructures necessary to ensure proper functionality of LIGA components. We draw both from previously published work by external researchers in the field and from published and unpublished studies from within Sandia.

  1. Harmonic versus LigaSure hemostasis technique in thyroid surgery: A meta-analysis

    PubMed Central

    Upadhyaya, Arun; Hu, Tianpeng; Meng, Zhaowei; Li, Xue; He, Xianghui; Tian, Weijun; Jia, Qiang; Tan, Jian

    2016-01-01

    Harmonic scalpel and LigaSure vessel sealing systems have been suggested as options for saving surgical time and reducing postoperative complications. The aim of the present meta-analysis was to compare surgical time, postoperative complications and other parameters between them for the open thyroidectomy procedure. Studies were retrieved from MEDLINE, Cochrane Library, EMBASE and ISI Web of Science until December 2015. All the randomized controlled trials (RCTs) comparing Harmonic scalpel and LigaSure during open thyroidectomy were selected. Following data extraction, statistical analyses were performed. Among the 24 studies that were evaluated for eligibility, 7 RCTs with 981 patients were included. The Harmonic scalpel significantly reduced surgical time compared with LigaSure techniques (mean difference, −8.79 min; 95% confidence interval, −15.91 to −1.67; P=0.02). However, no significant difference was observed between the groups for intraoperative blood loss, postoperative blood loss, duration of hospital stay, thyroid weight, or postoperative serum calcium level. The present meta-analysis indicated superiority of the Harmonic scalpel only in terms of surgical time compared with LigaSure hemostasis techniques in open thyroid surgery. PMID:27446546
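
    For readers unfamiliar with how a pooled mean difference such as the one reported above is obtained, the sketch below shows a generic fixed-effect (inverse-variance) pooling of study-level mean differences in operative time. It is not the authors' analysis, and the study-level numbers are fabricated placeholders.

    ```python
    import math

    def pooled_mean_difference(diffs, ses, z=1.96):
        """Fixed-effect (inverse-variance) pooled mean difference and its 95% CI."""
        weights = [1.0 / se ** 2 for se in ses]
        pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

    if __name__ == "__main__":
        # hypothetical per-study mean differences in operative time (minutes) and standard errors
        diffs = [-10.0, -6.5, -9.0]
        ses = [3.0, 2.5, 4.0]
        md, (lo, hi) = pooled_mean_difference(diffs, ses)
        print(f"pooled mean difference {md:.2f} min (95% CI {lo:.2f} to {hi:.2f})")
    ```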

  2. UV-LIGA microfabrication process for sub-terahertz waveguides utilizing multiple layered SU-8 photoresist

    NASA Astrophysics Data System (ADS)

    Malekabadi, Ali; Paoloni, Claudio

    2016-09-01

    A microfabrication process based on UV LIGA (German acronym of lithography, electroplating and molding) is proposed for the fabrication of relatively high aspect ratio sub-terahertz (100-1000 GHz) metal waveguides, to be used as a slow wave structure in sub-THz vacuum electron devices. The high accuracy and tight tolerances required to properly support frequencies in the sub-THz range can be only achieved by a stable process with full parameter control. The proposed process, based on SU-8 photoresist, has been developed to satisfy high planar surface requirements for metal sub-THz waveguides. It will be demonstrated that, for a given thickness, it is more effective to stack a number of layers of SU-8 with lower thickness rather than using a single thick layer obtained at lower spin rate. The multiple layer approach provides the planarity and the surface quality required for electroforming of ground planes or assembly surfaces and for assuring low ohmic losses of waveguides. A systematic procedure is provided to calculate soft and post-bake times to produce high homogeneity SU-8 multiple layer coating as a mold for very high quality metal waveguides. A double corrugated waveguide designed for 0.3 THz operating frequency, to be used in vacuum electronic devices, was fabricated as test structure. The proposed process based on UV LIGA will enable low cost production of high accuracy sub-THz 3D waveguides. This is fundamental for producing a new generation of affordable sub-THz vacuum electron devices, to fill the technological gap that still prevents a wide diffusion of numerous applications based on THz radiation.

  3. Temperature Sensitivity Conferred by ligA Alleles from Psychrophilic Bacteria upon Substitution in Mesophilic Bacteria and a Yeast Species

    PubMed Central

    Pankowski, Jarosław A.; Puckett, Stephanie M.

    2016-01-01

    We have assembled a collection of 13 psychrophilic ligA alleles that can serve as genetic elements for engineering mesophiles to a temperature-sensitive (TS) phenotype. When these ligA alleles were substituted into Francisella novicida, they conferred a TS phenotype with restrictive temperatures between 33 and 39°C. When the F. novicida ligA hybrid strains were plated above their restrictive temperatures, eight of them generated temperature-resistant variants. For two alleles, the mutations that led to temperature resistance clustered near the 5′ end of the gene, and the mutations increased the predicted strength of the ribosome binding site at least 3-fold. Four F. novicida ligA hybrid strains generated no temperature-resistant variants at a detectable level. These results suggest that multiple mutations are needed to create temperature-resistant variants of these ligA gene products. One ligA allele was isolated from a Colwellia species that has a maximal growth temperature of 12°C, and this allele supported growth of F. novicida only as a hybrid between the psychrophilic and the F. novicida ligA genes. However, the full psychrophilic gene alone supported the growth of Salmonella enterica, imparting a restrictive temperature of 27°C. We also tested two ligA alleles from two Pseudoalteromonas strains for their ability to support the viability of a Saccharomyces cerevisiae strain that lacked its essential gene, CDC9, encoding an ATP-dependent DNA ligase. In both cases, the psychrophilic bacterial alleles supported yeast viability and their expression generated TS phenotypes. This collection of ligA alleles should be useful in engineering bacteria, and possibly eukaryotic microbes, to predictable TS phenotypes. PMID:26773080

  4. Miniature Inchworm Actuators Fabricated by Use of LIGA

    NASA Technical Reports Server (NTRS)

    Yang, Eui-Hyeok

    2003-01-01

    Miniature inchworm actuators that would have relatively simple designs have been proposed for applications in which there are requirements for displacements of the order of microns or tens of microns and for the ability to hold their positions when electric power is not applied. The proposed actuators would be members of the class of microelectromechanical systems (MEMS), but would be designed and fabricated following an approach that is somewhat unusual for MEMS. Like other MEMS actuators, the proposed inchworm actuators could utilize thermoplastic, bimetallic, shape-memory-alloy, or piezoelectric actuation principles. The figure depicts a piezoelectric inchworm actuator according to the proposal. As in other inchworm actuators, linear motion of an extensible member would be achieved by lengthening and shortening the extensible member in synchronism with alternately clamping and releasing one and then the other end of the member. In this case, the moving member would be the middle one; the member would be piezoelectric and would be shortened by applying a voltage to it. The two outer members would also be piezoelectric; the release of the clamps on the upper or lower end would be achieved by applying a voltage to the electrodes on the upper or lower ends, respectively, of these members. Usually, MEMS actuators cannot be fabricated directly on the side walls of silicon wafers, yet the geometry of this actuator necessitates such fabrication. The solution, according to the proposal, would be to use the microfabrication technique known by the German acronym LIGA - "lithographie, galvanoformung, abformung," which means lithography, electroforming, molding. LIGA involves x-ray lithography of a polymer film followed by selective removal of material to form a three-dimensional pattern from which a mold is made. Among the advantages of LIGA for this purpose are that it is applicable to a broad range of materials, can be used to implement a variety of designs, including

  5. Beam line BL11 for LIGA process at the NewSUBARU

    NASA Astrophysics Data System (ADS)

    Mekaru, Harutaka; Utsumi, Yuichi; Hattori, Tadashi

    2001-07-01

    A beam line, BL11, has been constructed for exposure by hard X-ray lithography (HXL) in the LIGA (German acronym for Lithographie, Galvanoformung, Abformung) process at the synchrotron radiation (SR) facility NewSUBARU of the Laboratory of Advanced Science and Technology for Industry (LASTI) at Himeji Institute of Technology (HIT). The beam line was designed to the following criteria: a photon energy range of 4-6 keV, a beam spot size on the exposure stage of ≥60 × 5 mm², and a total irradiated photon density of ≥10¹¹ photons/cm². PMMA sheet etching was successfully demonstrated using the output beam. We conclude that this beam line performs sufficiently well for studying HXL exposure in the LIGA process.

  6. An Evaluation of Item Response Theory Classification Accuracy and Consistency Indices

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Hao, Shiqi

    2012-01-01

    This article introduces two new classification consistency indices that can be used when item response theory (IRT) models have been applied. The new indices are shown to be related to Rudner's classification accuracy index and Guo's classification accuracy index. The Rudner- and Guo-based classification accuracy and consistency indices are…
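
    As context for the Rudner-based index mentioned above, the sketch below computes a Rudner-style classification accuracy estimate by treating each examinee's ability estimate as normally distributed around theta with its conditional standard error. This is a generic illustration under that assumption, not the article's procedure; the cut score and example values are hypothetical.

    ```python
    from math import erf, sqrt

    def normal_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def rudner_style_accuracy(thetas, ses, cut):
        """Average probability of classifying each examinee on the correct side of `cut`."""
        total = 0.0
        for theta, se in zip(thetas, ses):
            p_above = 1.0 - normal_cdf((cut - theta) / se)
            total += p_above if theta >= cut else 1.0 - p_above
        return total / len(thetas)

    if __name__ == "__main__":
        thetas = [-1.2, -0.3, 0.1, 0.8, 1.5]   # ability estimates (hypothetical)
        ses = [0.35, 0.30, 0.28, 0.30, 0.33]   # conditional standard errors (hypothetical)
        print(f"estimated classification accuracy: {rudner_style_accuracy(thetas, ses, cut=0.0):.3f}")
    ```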

  7. Structure-guided mutational analysis of the nucleotidyltransferase domain of Escherichia coli NAD+-dependent DNA ligase (LigA).

    PubMed

    Zhu, Hui; Shuman, Stewart

    2005-04-01

    NAD+-dependent DNA ligase (LigA) is essential for bacterial growth and a potential target for antimicrobial drug discovery. Here we queried the role of 14 conserved amino acids of Escherichia coli LigA by alanine scanning and thereby identified five new residues within the nucleotidyltransferase domain as being essential for LigA function in vitro and in vivo. Structure activity relationships were determined by conservative mutagenesis for the Glu-173, Arg-200, Arg-208, and Arg-277 side chains, as well as four other essential side chains that had been identified previously (Lys-115, Asp-117, Asp-285, and Lys-314). In addition, we identified Lys-290 as important for LigA activity. Reference to the structure of Enterococcus faecalis LigA allowed us to discriminate three classes of essential/important side chains that: (i) contact NAD+ directly (Lys-115, Glu-173, Lys-290, and Lys-314); (ii) comprise the interface between the NMN-binding domain (domain Ia) and the nucleotidyltransferase domain or comprise part of a nick-binding site on the surface of the nucleotidyltransferase domain (Arg-200 and Arg-208); or (iii) stabilize the active site fold of the nucleotidyltransferase domain (Arg-277). Analysis of mutational effects on the isolated ligase adenylylation and phosphodiester formation reactions revealed different functions for essential side chains at different steps of the DNA ligase pathway, consistent with the proposal that the active site is serially remodeled as the reaction proceeds.

  8. Structure-guided Mutational Analysis of the Nucleotidyltransferase Domain of Escherichia coli DNA Ligase (LigA).

    PubMed

    Wang, Li Kai; Zhu, Hui; Shuman, Stewart

    2009-03-27

    NAD(+)-dependent DNA ligases (LigA) are ubiquitous in bacteria, where they are essential for growth and present attractive targets for antimicrobial drug discovery. LigA has a distinctive modular structure in which a nucleotidyltransferase catalytic domain is flanked by an upstream NMN-binding module and by downstream OB-fold, zinc finger, helix-hairpin-helix, and BRCT domains. Here we conducted a structure-function analysis of the nucleotidyltransferase domain of Escherichia coli LigA, guided by the crystal structure of the LigA-DNA-adenylate intermediate. We tested the effects of 29 alanine and conservative mutations at 15 amino acids on ligase activity in vitro and in vivo. We thereby identified essential functional groups that coordinate the reactive phosphates (Arg(136)), contact the AMP adenine (Lys(290)), engage the phosphodiester backbone flanking the nick (Arg(218), Arg(308), Arg(97) plus Arg(101)), or stabilize the active domain fold (Arg(171)). Finer analysis of the mutational effects revealed step-specific functions for Arg(136), which is essential for the reaction of LigA with NAD(+) to form the covalent ligase-AMP intermediate (step 1) and for the transfer of AMP to the nick 5'-PO(4) to form the DNA-adenylate intermediate (step 2) but is dispensable for phosphodiester formation at a preadenylylated nick (step 3).

  9. Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Gregory Herbert; Chen, Ken Shuang

    2004-06-01

    This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project, which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electrochemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion was solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from the start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.
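
    As a much-reduced analogue of the transport problem described above (and emphatically not the GOMA finite-element framework itself), the sketch below integrates one-dimensional transient diffusion of a single metal-ion species toward a deposition surface under an imposed current density, tracking deposit thickness through Faraday's law. All physical values are assumed placeholders.

    ```python
    import numpy as np

    # All values below are assumed placeholders, not parameters from the report.
    D = 7e-10         # ion diffusivity, m^2/s (Ni^2+-like)
    c_bulk = 600.0    # bulk concentration, mol/m^3
    i_dep = 50.0      # imposed deposition current density, A/m^2
    n_e, F = 2, 96485.0
    V_m = 6.6e-6      # molar volume of the deposit, m^3/mol (nickel-like)

    L, N = 200e-6, 100            # diffusion-layer thickness and grid cells
    dx = L / N
    dt = 0.4 * dx**2 / D          # explicit-scheme stable time step
    n_steps = 20000

    c = np.full(N + 1, c_bulk)    # concentration profile, node 0 at the deposit
    thickness = 0.0
    flux = i_dep / (n_e * F)      # mol/(m^2 s) consumed at the deposition surface

    for _ in range(n_steps):
        c_new = c.copy()
        c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c_new[0] = c_new[1] - flux * dx / D   # flux boundary condition at the deposit
        c_new[-1] = c_bulk                    # well-mixed bath at the far boundary
        c = np.clip(c_new, 0.0, None)
        thickness += V_m * flux * dt          # Faraday's-law growth of the deposit

    print(f"deposit thickness after {n_steps * dt:.1f} s: {thickness * 1e6:.3f} um")
    print(f"surface concentration: {c[0]:.1f} mol/m^3")
    ```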

  10. Miniature Scroll Pumps Fabricated by LIGA

    NASA Technical Reports Server (NTRS)

    Wiberg, Dean; Shcheglov, Kirill; White, Victor; Bae, Sam

    2009-01-01

    Miniature scroll pumps have been proposed as roughing pumps (low-vacuum pumps) for miniature scientific instruments (e.g., portable mass spectrometers and gas analyzers) that depend on vacuum. The larger scroll pumps used as roughing pumps in some older vacuum systems are fabricated by conventional machining. Typically, such an older scroll pump includes (1) an electric motor with an eccentric shaft to generate orbital motion of a scroll and (2) conventional bearings to restrict the orbital motion to a circle. The proposed miniature scroll pumps would differ from the prior, larger ones in both design and fabrication. A miniature scroll pump would include two scrolls: one mounted on a stationary baseplate and one on a flexure stage (see figure). An electromagnetic actuator in the form of two pairs of voice coils in a push-pull configuration would make the flexure stage move in the desired circular orbit. The capacitance between the scrolls would be monitored to provide position (gap) feedback to a control system that would adjust the drive signals applied to the voice coils to maintain the circular orbit as needed for precise sealing of the scrolls. To minimize power consumption and maximize precision of control, the flexure stage would be driven at the frequency of its mechanical resonance. The miniaturization of these pumps would entail both operational and manufacturing tolerances of <1 µm. Such tight tolerances cannot be achieved easily by conventional machining of high-aspect-ratio structures like those of scroll-pump components. In addition, the vibrations of conventional motors and ball bearings exceed these tight tolerances by an order of magnitude. Therefore, the proposed pumps would be fabricated by the microfabrication method known by the German acronym LIGA (lithographie, galvanoformung, abformung, which means lithography, electroforming, molding) because LIGA has been shown to be capable of providing the required tolerances at large aspect ratios.
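
    The gap-feedback idea described above can be sketched in a few lines: the scroll-to-scroll capacitance is converted to a gap with the parallel-plate relation, and a simple proportional-integral law produces a drive correction for the voice coils. This is a hedged toy illustration, not the proposal's controller; the electrode area, gains, and capacitance readings are assumed values.

    ```python
    EPS0 = 8.854e-12          # permittivity of free space, F/m
    PLATE_AREA = 1.0e-4       # m^2, assumed effective electrode area

    def gap_from_capacitance(c_farads):
        """Parallel-plate estimate of the scroll gap from measured capacitance."""
        return EPS0 * PLATE_AREA / c_farads

    class PIGapController:
        def __init__(self, target_gap_m, kp, ki, dt):
            self.target, self.kp, self.ki, self.dt = target_gap_m, kp, ki, dt
            self.integral = 0.0

        def update(self, measured_c):
            error = self.target - gap_from_capacitance(measured_c)
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral   # drive correction (arbitrary units)

    if __name__ == "__main__":
        ctrl = PIGapController(target_gap_m=0.5e-6, kp=1e6, ki=1e4, dt=1e-4)
        for c in (1.6e-9, 1.7e-9, 1.8e-9):                     # synthetic capacitance readings
            print(f"drive correction: {ctrl.update(c):+.3f}")
    ```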

  11. Liga developer apparatus system

    DOEpatents

    Boehme, Dale R.; Bankert, Michelle A.; Christenson, Todd R.

    2003-01-01

    A system to fabricate precise, high aspect ratio polymeric molds by a photolithographic process is described. The molds are used for producing micro-scale parts from engineering materials by the LIGA process. The invention is a developer system for developing a PMMA photoresist having exposed patterns comprising features with both very small sizes and very high aspect ratios. The developer system of the present invention comprises a developer tank, an intermediate rinse tank and a final rinse tank, each tank having a source of high frequency sonic agitation, temperature control, and continuous filtration. It has been found that by moving a patterned wafer through a specific sequence of developer/rinse solutions, in which an intermediate rinse solution completes development of those portions of the exposed resist left undeveloped by the developer solution, by agitating the solutions with a source of high frequency sonic vibration, and by adjusting and closely controlling the temperatures and continuously filtering and recirculating these solutions, it is possible to maintain the kinetic dissolution of the exposed PMMA polymer as the rate-limiting step.

  12. Investigations of the optical and EPR data and local structure for the trigonal tetrahedral Co2+ centers in LiGa5O8: Co2+ crystal

    NASA Astrophysics Data System (ADS)

    He, Jian; Liao, Bi-Tao; Mei, Yang; Liu, Hong-Gang; Zheng, Wen-Chen

    2018-01-01

    In this paper, we calculate in a unified way the optical and EPR data for the Co2+ ion at the trigonal tetrahedral Ga3+ site in LiGa5O8 crystal using the complete diagonalization (of energy matrix) method founded on the two-spin-orbit-parameter model, in which the contributions to the spectroscopic data from both the spin-orbit parameter of the dn ion (as in the classical crystal field theory) and that of the ligand ions are included. The ten calculated spectroscopic quantities (seven optical bands and the three spin-Hamiltonian parameters g//, g⊥ and D), obtained with only four adjustable parameters, are in good agreement with the available observed values. Compared with the host (GaO4)5- cluster, the large angular distortion, and hence the large trigonal distortion, of the (CoO4)6- impurity center obtained from the calculations is attributed to the large charge and size mismatch of the substitution. This reasonably explains the observed large g-anisotropy Δg (= g// - g⊥) and zero-field splitting D for the (CoO4)6- cluster in the LiGa5O8: Co2+ crystal.

  13. Osmotic regulation of expression of two extracellular matrix-binding proteins and a haemolysin of Leptospira interrogans: differential effects on LigA and Sph2 extracellular release.

    PubMed

    Matsunaga, James; Medeiros, Marco A; Sanchez, Yolanda; Werneid, Kristian F; Ko, Albert I

    2007-10-01

    The life cycle of the pathogen Leptospira interrogans involves stages outside and inside the host. Entry of L. interrogans from moist environments into the host is likely to be accompanied by the induction of genes encoding virulence determinants and the concomitant repression of genes encoding products required for survival outside of the host. The expression of the adhesin LigA, the haemolysin Sph2 (Lk73.5) and the outer-membrane lipoprotein LipL36 of pathogenic Leptospira species has been reported to be regulated by mammalian host signals. A previous study demonstrated that raising the osmolarity of the leptospiral growth medium to physiological levels encountered in the host by addition of various salts enhanced the levels of cell-associated LigA and LigB and extracellular LigA. In this study, we systematically examined the effects of osmotic upshift with ionic and non-ionic solutes on expression of the known mammalian host-regulated leptospiral genes. The levels of cell-associated LigA, LigB and Sph2 increased at physiological osmolarity, whereas LipL36 levels decreased, corresponding to changes in specific transcript levels. These changes in expression occurred irrespective of whether sodium chloride or sucrose was used as the solute. The increase of cellular LigA, LigB and Sph2 protein levels occurred within hours of adding sodium chloride. Extracellular Sph2 levels increased when either sodium chloride or sucrose was added to achieve physiological osmolarity. In contrast, enhanced levels of extracellular LigA were observed only with an increase in ionic strength. These results indicate that the mechanisms for release of LigA and Sph2 differ during host infection. Thus, osmolarity not only affects leptospiral gene expression by affecting transcript levels of putative virulence determinants but also affects the release of such proteins into the surroundings.

  14. Cantilevered multilevel LIGA devices and methods

    DOEpatents

    Morales, Alfredo Martin; Domeier, Linda A.

    2002-01-01

    In the formation of multilevel LIGA microstructures, a preformed sheet of photoresist material, such as polymethylmethacrylate (PMMA) is patterned by exposure through a mask to radiation, such as X-rays, and developed using a developer to remove the exposed photoresist material. A first microstructure is then formed by electroplating metal into the areas from which the photoresist has been removed. Additional levels of microstructure are added to the initial microstructure by covering the first microstructure with a conductive polymer, machining the conductive polymer layer to reveal the surface of the first microstructure, sealing the conductive polymer and surface of the first microstructure with a metal layer, and then forming the second level of structure on top of the first level structure. In such a manner, multiple layers of microstructure can be built up to allow complex cantilevered microstructures to be formed.

  15. Photostimulated near-infrared persistent luminescence as a new optical read-out from Cr3+-doped LiGa5O8

    PubMed Central

    Liu, Feng; Yan, Wuzhao; Chuang, Yen-Jun; Zhen, Zipeng; Xie, Jin; Pan, Zhengwei

    2013-01-01

    In conventional photostimulable storage phosphors, the optical information written by x-ray or ultraviolet irradiation is usually read out as a visible photostimulated luminescence (PSL) signal under the stimulation of a low-energy light with appropriate wavelength. Unlike the transient PSL, here we report a new optical read-out form, photostimulated persistent luminescence (PSPL) in the near-infrared (NIR), from a Cr3+-doped LiGa5O8 NIR persistent phosphor exhibiting a super-long NIR persistent luminescence of more than 1,000 h. An intense PSPL signal peaking at 716 nm can be repeatedly obtained in a period of more than 1,000 h when an ultraviolet-light (250–360 nm) pre-irradiated LiGa5O8:Cr3+ phosphor is repeatedly stimulated with a visible light or a NIR light. The LiGa5O8:Cr3+ phosphor has promising applications in optical information storage, night-vision surveillance, and in vivo bio-imaging. PMID:23532003

  16. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    ERIC Educational Resources Information Center

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  17. Pressure-induced solid-solid reconstructive phase transition in LiGaO2 dominated by elastic strain

    NASA Astrophysics Data System (ADS)

    Hu, Qiwei; Yan, Xiaozhi; Lei, Li; Wang, Qiming; Feng, Leihao; Qi, Lei; Zhang, Leilei; Peng, Fang; Ohfuji, Hiroaki; He, Duanwei

    2018-01-01

    Pressure-induced solid-solid reconstructive phase transitions, such as graphite-diamond and wurtzite-rocksalt in GaN and AlN, occur at significantly higher pressures than expected from equilibrium coexistence, and their transition paths are always inconsistent with each other. These observations indicate that the underlying nucleation and growth mechanisms in solid-solid reconstructive phase transitions are poorly understood. Here, we propose an elastic-strain-dominated mechanism in a reconstructive phase transition, β-LiGaO2 to γ-LiGaO2, based on in situ high-pressure angle-dispersive x-ray diffraction and single-crystal Raman scattering. This mechanism suggests that the pressure-induced solid-solid reconstructive phase transition is neither purely diffusionless nor purely diffusive, as conventionally assumed, but a combination of the two. Large elastic strains are accumulated, with coherent nucleation, in the early stage of the transition. The elastic strains along the 〈100〉 and 〈001〉 directions are too large to be relaxed by the shear stress, so an intermediate structure emerges, reducing the elastic strains and making the transition energetically favorable. At higher pressures, when the elastic strains become small enough to be relaxed, the phase transition to γ-LiGaO2 begins and the coherent nucleation is replaced by a semicoherent one, with the Li and Ga atoms disordered.

  18. A Transdermal Drug Delivery System Based on LIGA Technology and Soft Lithography

    NASA Astrophysics Data System (ADS)

    Matteucci, Marco; Perennes, Frederic; Marmiroli, Benedetta; Di Fabrizio, Enzo

    2007-01-01

    This report presents a transdermal drug delivery system based on LIGA fabricated microparts. It is a portable device combining a magnetically actuated micro gear pump with a microneedle array. The fluidic behaviour of the system is analyzed in order to predict its performance according to the dimension of the microparts and then compared to experimental data. The manufacturing process of both micropump and microneedle array are described.
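
    As a rough example of the kind of fluidic sizing analysis mentioned above (not the paper's actual model), the sketch below uses the Hagen-Poiseuille relation to estimate the flow an array of hollow microneedles passes for a given pump pressure. The geometry, fluid properties, and pressure are assumed placeholders.

    ```python
    import math

    def array_flow_uL_per_min(n_needles, bore_radius_m, length_m, dp_Pa, viscosity_Pa_s):
        """Total volumetric flow through n identical cylindrical bores in parallel (Hagen-Poiseuille)."""
        q_single = math.pi * bore_radius_m**4 * dp_Pa / (8.0 * viscosity_Pa_s * length_m)
        return n_needles * q_single * 1e9 * 60.0   # m^3/s -> uL/min

    if __name__ == "__main__":
        # assumed: 10x10 array, 5 um bore radius, 500 um long needles, water at 20 degC, 10 kPa drive
        print(f"{array_flow_uL_per_min(100, 5e-6, 500e-6, 1e4, 1.0e-3):.1f} uL/min")
    ```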

  19. Use of LigaSure™ on bile duct in rats: an experimental study.

    PubMed

    Marte, Antonio; Pintozzi, Lucia

    2017-08-01

    The closure of a cystic duct during cholecystectomy by means of radiofrequency is still controversial. We report our preliminary experimental results with the use of LigaSure™ on the common bile duct in rats. Thirty Wistar rats weighing 70 to 120 g were employed for this study. The animals were all anesthetized with intraperitoneal ketamine and then divided into three groups. The first group (10 rats, Group C) underwent only laparotomy and isolation of the common bile duct. The second (10 rats, Group M) underwent laparotomy and closure of the common bile duct (CBD) with monopolar coagulation. The third group (10 rats, Group L) underwent laparotomy and sealing of the common bile duct with two applications of LigaSure™. Afterwards, all rats were kept in comfortable cages and were administered dibenzamine for five days. They were all sacrificed on day 20. Through a laparotomy, the liver and bile duct were removed for histological examination. Blood samples were obtained to measure bilirubin, amylase and transaminase levels. The mortality rate was 0 in the control group (C), 3/10 rats in group M and 0 in group L. In group L, the macroscopic examination showed a large choledochocele (3-3.5 × 1.5 cm) with few adhesions. At the histological examination there was optimal sealing of the common bile duct in 9/10 rats. In group M, 2/10 rats had liver abscesses, 3/10 rats had choledochocele and 5/10 rats had biliary peritonitis. There was intense tissue inflammation and the dissection was difficult. Analyses of blood samples showed an increase in total bilirubin, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in groups M and L. The preliminary results of our study confirm that radiofrequency can be safely used for the closure of the common bile duct. The choledochocele obtained with this technique could represent a good experimental model. These results could be a further step toward using the LigaSure™ in clipless cholecystectomy.

  20. Acquisition of negative complement regulators by the saprophyte Leptospira biflexa expressing LigA or LigB confers enhanced survival in human serum.

    PubMed

    Castiblanco-Valencia, Mónica M; Fraga, Tatiana R; Breda, Leandro C D; Vasconcellos, Sílvio A; Figueira, Cláudio P; Picardeau, Mathieu; Wunder, Elsio; Ko, Albert I; Barbosa, Angela S; Isaac, Lourdes

    2016-05-01

    Leptospiral immunoglobulin-like (Lig) proteins are surface exposed molecules present in pathogenic but not in saprophytic Leptospira species. We have previously shown that Lig proteins interact with the soluble complement regulators Factor H (FH), FH like-1 (FHL-1), FH related-1 (FHR-1) and C4b Binding Protein (C4BP). In this study, we used the saprophyte L. biflexa serovar Patoc as a surrogate host to address the specific role of LigA and LigB proteins in leptospiral complement evasion. L. biflexa expressing LigA or LigB was able to acquire FH and C4BP. Bound complement regulators retained their cofactor activities of FI in the proteolytic cleavage of C3b and C4b. Moreover, heterologous expression of ligA and ligB genes in the saprophyte L. biflexa enhanced bacterial survival in human serum. Complement deposition on lig-transformed L. biflexa was assessed by flow cytometry analysis. With regard to MAC deposition, L. biflexa expressing LigA or LigB presented an intermediate profile: MAC deposition levels were greater than those found in the pathogenic L. interrogans, but lower than those observed for L. biflexa wildtype. In conclusion, Lig proteins contribute to in vitro control of complement activation on the leptospiral surface, promoting an increased bacterial survival in human serum. Copyright © 2016 European Federation of Immunological Societies. All rights reserved.

  1. Acquisition of negative complement regulators by the saprophyte Leptospira biflexa expressing LigA or LigB confers enhanced survival in human serum

    PubMed Central

    Castiblanco-Valencia, Mónica M.; Fraga, Tatiana R.; Breda, Leandro C.D.; Vasconcellos, Sílvio A.; Figueira, Cláudio P.; Picardeau, Mathieu; Wunder, Elsio; Ko, Albert I.; Barbosa, Angela S.; Isaac, Lourdes

    2017-01-01

    Leptospiral immunoglobulin-like (Lig) proteins are surface exposed molecules present in pathogenic but not in saprophytic Leptospira species. We have previously shown that Lig proteins interact with the soluble complement regulators Factor H (FH), FH like-1 (FHL-1), FH related-1 (FHR-1) and C4b Binding Protein (C4BP). In this study, we used the saprophyte L. biflexa serovar Patoc as a surrogate host to address the specific role of LigA and LigB proteins in leptospiral complement evasion. L. biflexa expressing LigA or LigB was able to acquire FH and C4BP. Bound complement regulators retained their cofactor activities of FI in the proteolytic cleavage of C3b and C4b. Moreover, heterologous expression of ligA and ligB genes in the saprophyte L. biflexa enhanced bacterial survival in human serum. Complement deposition on lig-transformed L. biflexa was assessed by flow cytometry analysis. With regard to MAC deposition, L. biflexa expressing LigA or LigB presented an intermediate profile: MAC deposition levels were greater than those found in the pathogenic L. interrogans, but lower than those observed for L. biflexa wildtype. In conclusion, Lig proteins contribute to in vitro control of complement activation on the leptospiral surface, promoting an increased bacterial survival in human serum. PMID:26976804

  2. Design of electrostatically levitated micromachined rotational gyroscope based on UV-LIGA technology

    NASA Astrophysics Data System (ADS)

    Cui, Feng; Chen, Wenyuan; Su, Yufeng; Zhang, Weiping; Zhao, Xiaolin

    2004-12-01

    The prevailing micromachined vibratory gyroscope typically has a proof mass connected to the substrate by a mechanical suspension system, which makes it a tough challenge to achieve tactical- or inertial-grade performance levels. With a levitated rotor as the proof mass, a micromachined rotational gyroscope will potentially have higher performance than a vibratory gyroscope. Besides working as a moment-rebalance dual-axis gyroscope, a micromachined rotational gyroscope based on a levitated rotor can simultaneously work as a force-balance tri-axis accelerometer. Micromachined rotational gyroscopes based on electrostatically levitated silicon micromachined rotors have been notably developed. In this paper, the factors in designing a rotational gyro/accelerometer based on an electrostatically levitated disc-like rotor, including the gyroscopic action of the micro rotor, methods of stable levitation, micro-displacement detection and control, rotation drive and speed control, vacuum packaging, and microfabrication, are comprehensively considered. Hence a design for a rotational gyro/accelerometer with an electroformed nickel rotor employing low-cost UV-LIGA technology is presented. In this design, a wheel-like flat rotor is proposed and its basic dimensions, diameter and thickness, are estimated according to the required loading capability. Finally, its micromachining methods based on UV-LIGA technology and assembly technology are discussed.

  3. UV-LIGA technique for ECF micropumps using back UV exposure and self-alignment

    NASA Astrophysics Data System (ADS)

    Han, D.; Xia, Y.; Yokota, S.; Kim, J. W.

    2017-12-01

    This paper proposes and develops a novel UV-LIGA technique using back UV exposure and self-alignment to realize high aspect ratio micromachining (HARM) in high power density electro-conjugate fluid (ECF) micropumps. ECF is a functional fluid designed to be able to generate strong and active jet flow (ECF jetting) between anode and cathode in ECF when high DC voltage is applied. We have developed high power density ECF micropumps consisting of triangular prism and slit electrode pairs (TPSEs) fabricated by HARM. The traditional UV-LIGA technique for HARM is mainly divided into two approaches: (a) single thick layer and (b) multiple thin layers. Both methods have limitations—deformed molds in the former and misalignment between layers in the latter. Using the finite element method software COMSOL Multiphysics, we demonstrate that the deformed micro-molds critically impair the performance of ECF micropumps. In addition, we experimentally prove that the misalignment would easily trigger electric discharge in the ECF micropumps. To overcome these limitations, we conceive a new concept utilizing the seed electrode layer for electroforming as the UV shield and pattern photoresist (KMPR) by back UV exposure. The seed electrode layer should be composed of a non-transparent conductor (Au/Ti) for patterning and a transparent conductor (ITO) for wiring. Instead of ITO, we propose the concept of transparency-like electrodes comprised of thin metal line patterns. To verify this concept, KMPR layers with thicknesses of 70, 220, and 500 µm are experimentally investigated. In the case of 500 µm KMPR thickness, the concept of transparency-like electrode was partially proved. As a result, TPSEs with a height of 440 µm were successfully fabricated. Characteristic experiments demonstrated that ECF micropumps (367 mW cm-3) fabricated by back UV achieved almost the same output power density as ECF micropumps (391 mW cm-3) fabricated by front UV. This paper proves that the proposed

  4. Enhanced adhesion for LIGA microfabrication by using a buffer layer

    DOEpatents

    Bajikar, Sateesh S.; De Carlo, Francesco; Song, Joshua J.

    2004-01-27

    The present invention is an improvement on the LIGA microfabrication process wherein a buffer layer is applied to the upper or working surface of a substrate prior to the placement of a resist onto the surface of the substrate. The buffer layer is made from an inert low-Z (low atomic weight) material, a material that absorbs the secondary X-ray emissions generated from the substrate upon exposure to a primary X-ray source. Suitable materials for the buffer layer include polyamides and polyimides. The preferred polyimide is synthesized from pyromellitic dianhydride and oxydianiline (PMDA-ODA).

  5. Enhanced adhesion for LIGA microfabrication by using a buffer layer

    DOEpatents

    Bajikar, Sateesh S.; De Carlo, Francesco; Song, Joshua J.

    2001-01-01

    The present invention is an improvement on the LIGA microfabrication process wherein a buffer layer is applied to the upper or working surface of a substrate prior to the placement of a resist onto the surface of the substrate. The buffer layer is made from an inert low-Z (low atomic weight) material, a material that absorbs the secondary X-ray emissions generated from the substrate upon exposure to a primary X-ray source. Suitable materials for the buffer layer include polyamides and polyimides. The preferred polyimide is synthesized from pyromellitic dianhydride and oxydianiline (PMDA-ODA).

  6. The terminal portion of leptospiral immunoglobulin-like protein LigA confers protective immunity against lethal infection in the hamster model of leptospirosis.

    PubMed

    Silva, Everton F; Medeiros, Marco A; McBride, Alan J A; Matsunaga, Jim; Esteves, Gabriela S; Ramos, João G R; Santos, Cleiton S; Croda, Júlio; Homma, Akira; Dellagostin, Odir A; Haake, David A; Reis, Mitermayer G; Ko, Albert I

    2007-08-14

    Subunit vaccines are a potential intervention strategy against leptospirosis, which is a major public health problem in developing countries and a veterinary disease in livestock and companion animals worldwide. Leptospiral immunoglobulin-like (Lig) proteins are a family of surface-exposed determinants that have Ig-like repeat domains found in virulence factors such as intimin and invasin. We expressed fragments of the repeat domain regions of LigA and LigB from Leptospira interrogans serovar Copenhageni. Immunization of Golden Syrian hamsters with Lig fragments in Freund's adjuvant induced robust antibody responses against recombinant protein and native protein, as detected by ELISA and immunoblot, respectively. A single fragment, LigANI, which corresponds to the six carboxy-terminal Ig-like repeat domains of the LigA molecule, conferred immunoprotection against mortality (67-100%, P<0.05) in hamsters which received a lethal inoculum of L. interrogans serovar Copenhageni. However, immunization with this fragment did not confer sterilizing immunity. These findings indicate that the carboxy-terminal portion of LigA is an immunoprotective domain and may serve as a vaccine candidate for human and veterinary leptospirosis.

  7. Classical Spin Nematic Transition in LiGa0.95In0.05Cr4O8

    NASA Astrophysics Data System (ADS)

    Wawrzyńczak, R.; Tanaka, Y.; Yoshida, M.; Okamoto, Y.; Manuel, P.; Casati, N.; Hiroi, Z.; Takigawa, M.; Nilsen, G. J.

    2017-08-01

    We present the results of a combined 7Li-NMR and diffraction study on LiGa0.95In0.05Cr4O8, a member of the LiGa1-xInxCr4O8 "breathing" pyrochlore family. Via specific heat and NMR measurements, we find that the complex sequence of first-order transitions observed for LiGaCr4O8 is replaced by a single second-order transition at Tf = 11 K. Neutron and x-ray diffraction rule out both structural symmetry lowering and magnetic long-range order as the origin of this transition. Instead, reverse Monte Carlo fitting of the magnetic diffuse scattering indicates that the low-temperature phase may be described as a collinear spin nematic state, characterized by a quadrupolar order parameter. This state also shows signs of short-range order between collinear spin arrangements on tetrahedra, revealed by mapping the reverse Monte Carlo spin configurations onto a three-state color model.

  8. Upconversion of the mid-IR pulses to the near-IR in LiGaS2

    NASA Astrophysics Data System (ADS)

    Kato, Kiyoshi; Umemura, Nobuhiro; Okamoto, Takuya; Petrov, Valentin

    2018-02-01

    This paper reports the phase-matching properties of LiGaS2 for upconverting the mid-IR output of a Nd:YAG laser-pumped KTP and AgGaS2 optical parametric oscillator (OPO) to the near-IR by mixing it with the pump source. New Sellmeier equations are presented that reproduce well both the present experimental results and the published data points for second-harmonic generation (SHG) and sum-frequency generation (SFG) of a CO2 laser, a Ti:Al2O3 laser-pumped optical parametric amplifier (OPA), and a Nd:YAG laser-pumped OPO in the mid-IR. This index formula gives the important information that the group velocity mismatch (GVM) (Δsp = 1/υs - 1/υp) of LiGaS2 in the 4-11 μm range is 12-27 fs/mm lower than that of the widely used LiInS2, which makes it ideal for upconverting mid-IR femtosecond pulses with large spectral bandwidths to the near-IR.
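    As an illustration of how a Sellmeier index formula translates into the group velocity mismatch Δsp = 1/υs - 1/υp quoted above, the sketch below evaluates the group index numerically and converts the difference to fs/mm. The one-pole Sellmeier form and its coefficients are placeholders, not the LiGaS2 equations reported in the paper.

```python
import numpy as np

def sellmeier_n(lam_um, A, B, C, D):
    """Illustrative one-pole Sellmeier form n^2 = A + B*lam^2/(lam^2 - C) - D*lam^2.
    The coefficients used below are placeholders, not the LiGaS2 values from the paper."""
    lam2 = lam_um ** 2
    return np.sqrt(A + B * lam2 / (lam2 - C) - D * lam2)

def group_index(lam_um, n_func, h=1e-4):
    """Group index n_g = n - lambda * dn/dlambda, via a central difference."""
    dn = (n_func(lam_um + h) - n_func(lam_um - h)) / (2.0 * h)
    return n_func(lam_um) - lam_um * dn

def gvm_fs_per_mm(lam_s_um, lam_p_um, n_func):
    """Group velocity mismatch Delta_sp = 1/v_s - 1/v_p = (n_g,s - n_g,p)/c, in fs/mm."""
    c_um_per_fs = 0.299792458                 # speed of light in um/fs
    dng = group_index(lam_s_um, n_func) - group_index(lam_p_um, n_func)
    return dng / c_um_per_fs * 1.0e3          # (fs/um) -> fs/mm

# Placeholder dispersion for one principal axis; 6-um signal mixed with a 1.064-um pump
n = lambda lam: sellmeier_n(lam, A=4.8, B=0.12, C=0.09, D=0.002)
print(gvm_fs_per_mm(6.0, 1.064, n))
```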

  9. Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2013-01-01

    Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…
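    A minimal sketch of the latent-trait (Rudner-style) estimator mentioned above, under the usual simplifying assumptions of a single cut score and a normal sampling distribution for each ability estimate with known standard error; the numbers in the example are hypothetical.

```python
from scipy.stats import norm

def rudner_classification_accuracy(theta_hats, ses, cut):
    """Illustrative Rudner-style estimate: for each examinee, the probability that
    the true ability lies on the same side of the cut score as the point estimate,
    assuming theta_hat ~ N(theta, se); the average is the expected accuracy rate."""
    total = 0.0
    for th, se in zip(theta_hats, ses):
        p_above = 1.0 - norm.cdf(cut, loc=th, scale=se)
        total += p_above if th >= cut else (1.0 - p_above)
    return total / len(theta_hats)

# Hypothetical ability estimates and standard errors for five examinees, cut score 0.0
print(rudner_classification_accuracy([-1.2, -0.3, 0.1, 0.8, 1.5],
                                      [0.40, 0.35, 0.33, 0.36, 0.45], cut=0.0))
```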

  10. Heterologous expression of pathogen-specific genes ligA and ligB in the saprophyte Leptospira biflexa confers enhanced adhesion to cultured cells and fibronectin.

    PubMed

    Figueira, Cláudio Pereira; Croda, Julio; Choy, Henry A; Haake, David A; Reis, Mitermayer G; Ko, Albert I; Picardeau, Mathieu

    2011-06-09

    In comparison to other bacterial pathogens, our knowledge of the molecular basis of the pathogenesis of leptospirosis is extremely limited. An improved understanding of leptospiral pathogenetic mechanisms requires reliable tools for functional genetic analysis. Leptospiral immunoglobulin-like (Lig) proteins are surface proteins found in pathogenic Leptospira, but not in saprophytes. Here, we describe a system for heterologous expression of the Leptospira interrogans genes ligA and ligB in the saprophyte Leptospira biflexa serovar Patoc. The genes encoding LigA and LigB under the control of a constitutive spirochaetal promoter were inserted into the L. biflexa replicative plasmid. We were able to demonstrate expression and surface localization of LigA and LigB in L. biflexa. We found that the expression of the lig genes significantly enhanced the ability of transformed L. biflexa to adhere in vitro to extracellular matrix components and cultured cells, suggesting the involvement of Lig proteins in cell adhesion. This work reports a complete description of the system we have developed for heterologous expression of pathogen-specific proteins in the saprophytic L. biflexa. We show that expression of LigA and LigB proteins from the pathogen confers a virulence-associated phenotype on L. biflexa, namely adhesion to eukaryotic cells and fibronectin in vitro. This study indicates that L. biflexa can serve as a surrogate host to characterize the role of key virulence factors of the causative agent of leptospirosis.

  11. Heterologous expression of pathogen-specific genes ligA and ligB in the saprophyte Leptospira biflexa confers enhanced adhesion to cultured cells and fibronectin

    PubMed Central

    2011-01-01

    Background In comparison to other bacterial pathogens, our knowledge of the molecular basis of the pathogenesis of leptospirosis is extremely limited. An improved understanding of leptospiral pathogenetic mechanisms requires reliable tools for functional genetic analysis. Leptospiral immunoglobulin-like (Lig) proteins are surface proteins found in pathogenic Leptospira, but not in saprophytes. Here, we describe a system for heterologous expression of the Leptospira interrogans genes ligA and ligB in the saprophyte Leptospira biflexa serovar Patoc. Results The genes encoding LigA and LigB under the control of a constitutive spirochaetal promoter were inserted into the L. biflexa replicative plasmid. We were able to demonstrate expression and surface localization of LigA and LigB in L. biflexa. We found that the expression of the lig genes significantly enhanced the ability of transformed L. biflexa to adhere in vitro to extracellular matrix components and cultured cells, suggesting the involvement of Lig proteins in cell adhesion. Conclusions This work reports a complete description of the system we have developed for heterologous expression of pathogen-specific proteins in the saprophytic L. biflexa. We show that expression of LigA and LigB proteins from the pathogen confers a virulence-associated phenotype on L. biflexa, namely adhesion to eukaryotic cells and fibronectin in vitro. This study indicates that L. biflexa can serve as a surrogate host to characterize the role of key virulence factors of the causative agent of leptospirosis. PMID:21658265

  12. The terminal portion of leptospiral immunoglobulin-like protein LigA confers protective immunity against lethal infection in the hamster model of leptospirosis

    PubMed Central

    Silva, Éverton F.; Medeiros, Marco A.; McBride, Alan J. A.; Matsunaga, Jim; Esteves, Gabriela S.; Ramos, João G. R.; Santos, Cleiton S.; Croda, Júlio; Homma, Akira; Dellagostin, Odir A.; Haake, David A.; Reis, Mitermayer G.; Ko, Albert I.

    2007-01-01

    Subunit vaccines are a potential intervention strategy against leptospirosis, which is a major public health problem in developing countries and a veterinary disease in livestock and companion animals worldwide. Leptospiral immunoglobulin-like (Lig) proteins are a family of surface-exposed determinants that have Ig-like repeat domains found in virulence factors such as intimin and invasin. We expressed fragments of the repeat domain regions of LigA and LigB from Leptospira interrogans serovar Copenhageni. Immunization of Golden Syrian hamsters with Lig fragments in Freund’s adjuvant induced robust antibody responses against recombinant protein and native protein, as detected by ELISA and immunoblot, respectively. A single fragment, LigANI, which corresponds to the six carboxy-terminal Ig-like repeat domains of the LigA molecule, conferred immunoprotection against mortality (67-100%, P <0.05) in hamsters which received a lethal inoculum of L. interrogans serovar Copenhageni. However, immunization with this fragment did not confer sterilizing immunity. These findings indicate that the carboxy-terminal portion of LigA is an immunoprotective domain and may serve as a vaccine candidate for human and veterinary leptospirosis. PMID:17629368

  13. Control of Gene Expression in Leptospira spp. by Transcription Activator-Like Effectors Demonstrates a Potential Role for LigA and LigB in Leptospira interrogans Virulence

    PubMed Central

    Pappas, Christopher J.

    2015-01-01

    Leptospirosis is a zoonotic disease that affects ∼1 million people annually, with a mortality rate of >10%. Currently, there is an absence of effective genetic manipulation tools for targeted mutagenesis in pathogenic leptospires. Transcription activator-like effectors (TALEs) are a recently described group of repressors that modify transcriptional activity in prokaryotic and eukaryotic cells by directly binding to a targeted sequence within the host genome. To determine the applicability of TALEs within Leptospira spp., two TALE constructs were designed. First, a constitutively expressed TALE gene specific for the lacO-like region upstream of bgaL was trans inserted in the saprophyte Leptospira biflexa (the TALEβgal strain). Reverse transcriptase PCR (RT-PCR) analysis and enzymatic assays demonstrated that BgaL was not expressed in the TALEβgal strain. Second, to study the role of LigA and LigB in pathogenesis, a constitutively expressed TALE gene with specificity for the homologous promoter regions of ligA and ligB was cis inserted into the pathogen Leptospira interrogans (TALElig). LigA and LigB expression was studied by using three independent clones: TALElig1, TALElig2, and TALElig3. Immunoblot analysis of osmotically induced TALElig clones demonstrated 2- to 9-fold reductions in the expression levels of LigA and LigB, with the highest reductions being noted for TALElig1 and TALElig2, which were avirulent in vivo and nonrecoverable from animal tissues. This study reconfirms galactosidase activity in the saprophyte and suggests a role for LigA and LigB in pathogenesis. Collectively, this study demonstrates that TALEs are effective at reducing the expression of targeted genes within saprophytic and pathogenic strains of Leptospira spp., providing an additional genetic manipulation tool for this genus. PMID:26341206

  14. Control of Gene Expression in Leptospira spp. by Transcription Activator-Like Effectors Demonstrates a Potential Role for LigA and LigB in Leptospira interrogans Virulence.

    PubMed

    Pappas, Christopher J; Picardeau, Mathieu

    2015-11-01

    Leptospirosis is a zoonotic disease that affects ∼1 million people annually, with a mortality rate of >10%. Currently, there is an absence of effective genetic manipulation tools for targeted mutagenesis in pathogenic leptospires. Transcription activator-like effectors (TALEs) are a recently described group of repressors that modify transcriptional activity in prokaryotic and eukaryotic cells by directly binding to a targeted sequence within the host genome. To determine the applicability of TALEs within Leptospira spp., two TALE constructs were designed. First, a constitutively expressed TALE gene specific for the lacO-like region upstream of bgaL was trans inserted in the saprophyte Leptospira biflexa (the TALEβgal strain). Reverse transcriptase PCR (RT-PCR) analysis and enzymatic assays demonstrated that BgaL was not expressed in the TALEβgal strain. Second, to study the role of LigA and LigB in pathogenesis, a constitutively expressed TALE gene with specificity for the homologous promoter regions of ligA and ligB was cis inserted into the pathogen Leptospira interrogans (TALElig). LigA and LigB expression was studied by using three independent clones: TALElig1, TALElig2, and TALElig3. Immunoblot analysis of osmotically induced TALElig clones demonstrated 2- to 9-fold reductions in the expression levels of LigA and LigB, with the highest reductions being noted for TALElig1 and TALElig2, which were avirulent in vivo and nonrecoverable from animal tissues. This study reconfirms galactosidase activity in the saprophyte and suggests a role for LigA and LigB in pathogenesis. Collectively, this study demonstrates that TALEs are effective at reducing the expression of targeted genes within saprophytic and pathogenic strains of Leptospira spp., providing an additional genetic manipulation tool for this genus. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  15. Electromechanically driven microchopper for integration in microspectrometers based on LIGA technology

    NASA Astrophysics Data System (ADS)

    Krippner, Peter; Mohr, Juergen; Saile, Volker

    1999-09-01

    In recent years, microspectrometers made by LIGA technology for the visible wavelength range have found their way into the market. To open the wide field of spectral analysis in the infrared range, the concept of a highly transmissive hollow waveguide has been demonstrated successfully. In combination with linear detector arrays, hollow waveguide microspectrometers can be combined into handheld infrared spectrometer systems. The only obstacle to a miniaturized system is the lack of miniaturized light modulators. To solve this problem, a miniaturized light modulator has been developed. It consists of an oscillating stop driven by an electromagnetic actuator and is made out of permalloy by means of LIGA micromechanics. Its outer dimensions of approximately 3.0 × 3.2 mm² and a structure height of 280 µm allow it to be integrated into the plane of the entrance slit of the microspectrometer, which is about 20 mm to 30 mm in size. The spectrometer has alignment structures to ensure positioning of the oscillating stop close to the entrance slit, which simplifies assembly. The actuator is excited by a hybrid integrated coil fixed by springs that snap into place during assembly. The maximum supply voltage of 5 V allows the chopper to be used in low-voltage spectrometer systems, especially handheld systems. The highest modulation frequency is more than 1 kHz, which is sufficient to work with the lead salt detectors commonly used. In this frequency range, detector noise is greatly attenuated compared to continuous-light operation. The paper contains an outline of the concept of the whole microspectrometer system. Experimental results are discussed to demonstrate the performance of the system.

  16. Surface microstructuring of biocompatible bone analogue material HAPEX using LIGA technique and embossing

    NASA Astrophysics Data System (ADS)

    Schneider, Andreas; Rea, Susan; Huq, Ejaz; Bonfield, William

    2003-04-01

    HAPEX is an artificial bone analogue composite based on hydroxyapatite and polyethylene, which can be applied for growth of bone cells. Due to its biocompatibility and favourable mechanical properties, HAPEX is used for orthopaedic implants like tympanic (middle ear) bones. The morphology of HAPEX surfaces is of high interest and it is believed that surface structuring on a micron scale might improve the growth conditions for bone cells. A new and simple approach for the microstructuring of HAPEX surfaces has been investigated using LIGA technique. LIGA is a combination of several processes, in particular lithography, electroplating and forming/moulding. For HAPEX surface structuring, arrays of dots, grids and lines with typical lateral dimension ranging from 5 μm to 50 μm were created on a chromium photomask and the patterns were transferred into thick SU-8 photoresist (structure height > 10 μm) by UV lithography. Subsequently, the SU-8 structures served as moulds for electroplating nickel on Si wafers and nickel substrates. The final nickel microstructures were used as embossing master for the HAPEX material. Embossing was carried out using a conventional press (> 500 hPa) with the facility to heat the master and the HAPEX. The temperature ranged from ambient to a few degrees above glass transition temperature (Tg) of HAPEX. The paper will include details of the fabrication process and process tolerances in lateral and vertical directions. Data obtained are correlated to the temperature used during embossing.

  17. Application of EEM fluorescence spectroscopy in understanding of the "LIGA" phenomenon in the Bay of Biscay (France)

    NASA Astrophysics Data System (ADS)

    Parot, Jérémie; Susperregui, Nicolas; Rouaud, Vanessa; Dubois, Laurent; Anglade, Nathalie; Parlanti, Edith

    2014-05-01

    Marine mucilage is present in all oceans over the world, and in particular in the Mediterranean Sea and in the Pacific Ocean. Surface water warming and hydrodynamic processes can favor the coalescence of marine mucilage, large marine aggregates representing an ephemeral and extreme habitat for biota. DOM is a heterogeneous, complex mixture of compounds, including extracellular polymeric substances (EPS), with wide-ranging chemical properties, and it is well known to interact with pollutants and to affect their transport and fate in the aquatic environment. The LIGA French research program focuses on tracing colloidal dissolved organic matter (DOM) sources and cycling in the Bay of Biscay (South Western French coast). This ephemeral phenomenon (called "LIGA" in the South West of France) has been observed more than 750 times since 2010. It has a great ecological impact on marine ecosystems and has been shown to be concomitant with the development of pathogen organisms. A one-year intensive survey of fluorescent DOM was undertaken. From April 2013 until May 2014, water samples were collected monthly from the Adour River (the main fresh water input) and from 2 sites in the Bay of Biscay at 3 depths of the water column (surface water, at the maximum of chlorophyll-a, and deep water). Moreover, intensified sampling took place twice a week for 4 weeks following the appearance of the phenomenon. UV/visible absorbance and excitation emission matrix (EEM) fluorescence spectroscopy combined with PARAFAC and PCA analyses have been used to characterize colloidal DOM in the Bay of Biscay in order to estimate DOM sources as well as the spatial and temporal variability of DOM properties. The preliminary results, obtained for about 70 samples of this survey, have already highlighted spatial and temporal variations of DOM optical properties, and a peculiar fluorescent component (exc 300 nm/em 338 nm) was detected when the LIGA phenomenon arose. The appearance of this specific

  18. Lateral thermal damage of mesoappendix and appendiceal base during laparoscopic appendectomy in children: comparison of the harmonic scalpel (Ultracision), bipolar coagulation (LigaSure), and thermal fusion technology (MiSeal).

    PubMed

    Pogorelić, Zenon; Katić, Josip; Mrklić, Ivana; Jerončić, Ana; Šušnjar, Tomislav; Jukić, Miro; Vilović, Katarina; Perko, Zdravko

    2017-05-15

    The aim of this study was to compare lateral thermal damage of the mesoappendix and appendiceal base using three different instruments for sealing and cutting of the mesoappendix. A total number of 99 patients (54 males and 45 females) who underwent laparoscopic appendectomy because of suspected appendicitis between December 2013 and May 2015 were enrolled in the study. The patients were divided in three groups based on the instrument used for sealing of the mesoappendix: group 1 (Ultracision; n = 36), group 2 (LigaSure; n = 32), and group 3 (MiSeal; n = 31). Lateral thermal damage, intraoperative and postoperative complications, duration of surgery, hospital stay, and economic value were compared within groups. The median age of patients was 14 y (range 3-17). A histopathologic analysis revealed a positive diagnosis of appendicitis in 84 patients (85%). The median lateral thermal damage on the appendiceal base using Ultracision, LigaSure, and MiSeal was 0.10 mm, 0.16 mm, and 0.10 mm, respectively, and on the mesoappendix, 0.08 mm, 0.13 mm, and 0.08 mm, respectively. Significantly higher thermal damage was found on the mesoappendix (P = 0.015) and appendiceal base (P = 0.012) in patients treated with LigaSure than in patients from other groups. There were no statistical differences among the groups regarding intraoperative and postoperative complications (P = 0.098). No significant difference in thermal damage between appendicitis and nonappendicitis groups was found (P = 0.266). The use of Ultracision, LigaSure, and MiSeal for sealing of the mesoappendix in laparoscopic appendectomy in children is safe and useful. LigaSure produces significantly greater lateral thermal damage compared with the other instruments. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Serotonin and Dopamine Gene Variation and Theory of Mind Decoding Accuracy in Major Depression: A Preliminary Investigation.

    PubMed

    Zahavi, Arielle Y; Sabbagh, Mark A; Washburn, Dustin; Mazurka, Raegan; Bagby, R Michael; Strauss, John; Kennedy, James L; Ravindran, Arun; Harkness, Kate L

    2016-01-01

    Theory of mind-the ability to decode and reason about others' mental states-is a universal human skill and forms the basis of social cognition. Theory of mind accuracy is impaired in clinical conditions evidencing social impairment, including major depressive disorder. The current study is a preliminary investigation of the association of polymorphisms of the serotonin transporter (SLC6A4), dopamine transporter (DAT1), dopamine receptor D4 (DRD4), and catechol-O-methyl transferase (COMT) genes with theory of mind decoding in a sample of adults with major depression. Ninety-six young adults (38 depressed, 58 non-depressed) completed the 'Reading the Mind in the Eyes task' and a non-mentalistic control task. Genetic associations were only found for the depressed group. Specifically, superior accuracy in decoding mental states of a positive valence was seen in those homozygous for the long allele of the serotonin transporter gene, 9-allele carriers of DAT1, and long-allele carriers of DRD4. In contrast, superior accuracy in decoding mental states of a negative valence was seen in short-allele carriers of the serotonin transporter gene and 10/10 homozygotes of DAT1. Results are discussed in terms of their implications for integrating social cognitive and neurobiological models of etiology in major depression.

  20. Development of a W-band Serpentine Waveguide Amplifier based on a UV-LIGA Microfabricated Copper Circuit

    DTIC Science & Technology

    2013-03-01

    beam tunnel [5,6] for a high-power, wideband W-band traveling-wave tube (TWT) amplifier. UV-LIGA is also a promising technique at higher... wideband, high-power operation of the amplifier [7, 8]. The interaction circuit consists of two traveling-wave stages separated by a power... technique produces monolithic all-copper circuits, integrated with electron beam tunnel, suitable for high-power continuous-wave operation [1]. We

  1. Examining Impulse-Variability Theory and the Speed-Accuracy Trade-Off in Children's Overarm Throwing Performance.

    PubMed

    Molina, Sergio L; Stodden, David F

    2018-04-01

    This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences, with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children, as was also found in a previous adult sample, current policies and practices of practitioners may need to be reevaluated.

  2. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  3. Delamination study of chip-to-chip bonding for a LIGA-based safety and arming system

    NASA Astrophysics Data System (ADS)

    Subramanian, Gowrishankar; Deeds, Michael; Cochran, Kevin R.; Raghavan, Raghu; Sandborn, Peter A.

    1999-08-01

    The development of a miniature underwater weapon safety and arming system requires reliable chip-to-chip bonding of die that contain microelectromechanical actuators and sensors fabricated using a LIGA MEMS fabrication process. Chip-to-chip bonding is evaluated for several different bond materials (indium solder, thermoplastic paste, thermoplastic film, and epoxy film) and bonding configurations (with an alloy 42 spacer, silicon to ceramic, and silicon to silicon). Metrology using acoustic micro imaging has been developed to determine the fraction of delamination in the samples.

  4. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    NASA Astrophysics Data System (ADS)

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  5. Improving the accuracy of Møller-Plesset perturbation theory with neural networks.

    PubMed

    McGibbon, Robert T; Taube, Andrew G; Donchev, Alexander G; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L; Shaw, David E

    2017-10-28

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
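    The following toy sketch only illustrates the general idea of reweighting MP2 correlation components with a small neural network; the actual SNS-MP2 architecture, features, and training are not described here, so every name, shape, and value below is a made-up placeholder.

```python
import numpy as np

def toy_sns_mp2_interaction(e_hf, e_os, e_ss, features, W1, b1, W2, b2):
    """Toy sketch of MP2 reweighting: a small feed-forward network maps dimer
    descriptors to two scale factors applied to the opposite-spin and same-spin
    MP2 correlation components of the interaction energy. This only illustrates
    the general idea; the real SNS-MP2 model and features differ."""
    h = np.tanh(W1 @ features + b1)                 # hidden layer
    c_os, c_ss = 1.0 + 0.5 * np.tanh(W2 @ h + b2)   # scale factors kept near 1
    return e_hf + c_os * e_os + c_ss * e_ss

rng = np.random.default_rng(0)
feat = rng.normal(size=8)                           # hypothetical descriptors
W1, b1 = rng.normal(size=(16, 8)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.1, np.zeros(2)
print(toy_sns_mp2_interaction(-1.2, -3.4, -0.9, feat, W1, b1, W2, b2))  # made-up kcal/mol values
```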

  6. Retrospective comparison of Traditional vs. LigaSure impact dissection during pancreatoduodenectomy: how to save money by using an expensive device.

    PubMed

    Piccinni, Giuseppe; Pasculli, Alessandro; D'Ambrosio, Erasmina; Gurrado, Angela; Lissidini, Germana; Testini, Mario

    2013-09-01

    Pancreatoduodenectomy is an exceptional procedure that requires an extensive dissection of the supramesocolic region extended to the first jejunal limb. Lymphadenectomy, required for cancer, increases the dissection surface. The extensive preparation of the area is traditionally conducted with bipolar or monopolar instruments, while clips, ligatures, and sutures are used for haemostasis. LigaSure™ vessel sealing (LSVS; Valleylab, Boulder, CO) is a technology that obtains vessel closure by using the body's own collagen and elastin to create a permanent fusion zone. This is obtained by a combination of forceps pressure and radiofrequency. This effect has been improved by the introduction of the Force Triad™ (Valleylab, Boulder, CO) energy platform, controlled by TissueFect™ (Valleylab, Boulder, CO) sensing technology. With this device, the surgeon is able to fuse vessels up to 7 mm, lymphatics, tissue bundles, and pulmonary vasculature in a fast-seal cycle of almost 4 seconds. In our daily practice of open surgery we observe a rapid improvement of abdominal drainage output with a drastic reduction of protein loss. Its practical significance is, in our opinion, that we obtain a rapid recovery of normal serum protein levels with a low number of blood/plasma transfusions and a real improvement of anastomosis healing. Moreover, the efficacy and the speed of work of the device allow us to reduce the operating time significantly but safely. We performed a retrospective analysis of the data of 20 pancreatic resections conducted both with traditional dissection and with the LigaSure Impact device with the Force Triad platform in order to verify whether the observed benefits were real. Our clinical results show that the use of the LigaSure Impact device with the Force Triad energy platform is really useful in open surgery to save operating time, number of postoperative days, and hemoderivative administration.

  7. LigaSure vessel sealing system in laparoscopic Palomo varicocele ligation in children and adolescents.

    PubMed

    Marte, Antonio; Sabatino, Maria Domenica; Borrelli, Micaela; Cautiero, Pasquale; Romano, Mercedes; Vessella, Antonio; Parmeggiani, Pio

    2007-04-01

    We review our experience with laparoscopic Palomo varicocele ligation using the LigaSure device in children and adolescents. Between June 2003 and December 2004, 25 varicoceles were treated by laparoscopic Palomo varicocele ligation using LigaSure vascular sealing. Patient ages ranged from 10 to 19 years (mean, 14.5 years). Indications for surgery included grade II-III varicocele or ipsilateral testicular hypotrophy. One patient was affected by recurrent contralateral inguinal hernia and 2 presented with an ipsilateral patent processus vaginalis. We placed a 5-mm umbilical port for access, and kept pneumoperitoneum below 15 mm Hg. Under laparoscopic guidance, two additional ports of 3 and 5 mm were inserted in the lower right and left quadrants, respectively. Once the vessels were isolated, the vascular sealant was applied 3-4 times to ensure coagulation of the spermatic vessels; the vessels were then divided with laparoscopic 5-mm scissors. Inguinal hernia and patent processus vaginalis were treated according to Schier's technique. All procedures were performed in our day surgery facility. Mean operative time was 18 minutes, which is significantly less than the time required in a similar group of 12 patients who underwent laparoscopic clip ligation. There were no perioperative complications. Eleven of 16 patients recovered testicular size. Two patients had postoperative hydrocele: the first was treated successfully with scrotal aspiration, while the other patient required scrotal hydrocelectomy. Laparoscopic Palomo varicocele sealing can be performed safely and rapidly and is highly successful in correcting varicoceles in young males. We also found it to be the ideal technique to correct the associated inguinal hernia or patent processus vaginalis.

  8. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  9. [On the scheme of scientific accuracy in clinical specialties].

    PubMed

    Ortega Calvo, M

    2006-11-01

    Will medical specialties become sciences in the future? Yes, progressively they will. Accuracy in clinical specialties will be different in the future owing to formal logic and mathematics, advances in quantum physics, and the applications of relativity theory. Evidence-based medicine is now helping clinical specialties toward scientific accuracy by way of decision theory.

  10. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  11. Meta-analysis of randomized controlled trials comparing outcomes for stapled hemorrhoidopexy versus LigaSure hemorrhoidectomy for symptomatic hemorrhoids in adults.

    PubMed

    Lee, Ko-Chao; Chen, Hong-Hwa; Chung, Kuan-Chih; Hu, Wan-Hsiang; Chang, Chia-Lo; Lin, Shung-Eing; Tsai, Kai-Lung; Lu, Chien-Chang

    2013-01-01

    The purpose of this meta-analysis was to compare treatment outcomes for adult patients with symptomatic hemorrhoids treated by stapled hemorrhoidopexy or LigaSure hemorrhoidectomy. A search of public medical databases was made to identify randomized controlled trials (RCTs) comparing stapled hemorrhoidopexy (SH) with LigaSure hemorrhoidectomy (LH) for the treatment of adult patients with symptomatic grade 3 and grade 4 hemorrhoids. Postoperative pain as measured using a visual analog scale was the primary outcome, and rate of recurrent prolapse and postoperative bleeding were secondary outcome measures. Four RCTs were identified that met the inclusion criteria. Data for the pooled outcomes were analyzed using odds ratio (OR) analysis. None of the individual studies in the analysis indicated a significant difference between SH and LH for the outcomes VAS pain score, recurrence rate, or postoperative bleeding. Pooled analysis revealed a significant OR for recurrent prolapse with the SH method (OR = 5.529, P = 0.016) for up to 2 years after surgery. No significant differences between the two methods were identified for VAS pain scores (OR = -1.060, P = 0.149) or postoperative bleeding (OR = 1.188, P = 0.871). Pooled analysis of RCT results comparing SH to LH for symptomatic hemorrhoids revealed a significantly greater incidence of recurrent prolapse for SH. The two techniques were associated with similar levels of postoperative pain and postoperative bleeding. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
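    A minimal sketch of fixed-effect, inverse-variance pooling of log odds ratios, the kind of model commonly underlying pooled estimates like those above (the abstract does not specify the exact pooling model used, so this is illustrative only); the 2x2 counts in the example are hypothetical.

```python
import math

def pooled_odds_ratio(tables):
    """Fixed-effect inverse-variance pooling of log odds ratios from 2x2 tables.
    Each table is (events_A, no_events_A, events_B, no_events_B)."""
    num, den = 0.0, 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d          # variance of the log odds ratio
        num += log_or / var
        den += 1.0 / var
    pooled_log_or = num / den
    se = math.sqrt(1.0 / den)
    ci = (math.exp(pooled_log_or - 1.96 * se), math.exp(pooled_log_or + 1.96 * se))
    return math.exp(pooled_log_or), ci

# Hypothetical recurrence counts (recur_SH, no_recur_SH, recur_LH, no_recur_LH) per trial
print(pooled_odds_ratio([(6, 54, 1, 59), (5, 45, 1, 49), (4, 36, 1, 39)]))
```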

  12. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  13. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  14. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  15. Applying signal-detection theory to the study of observer accuracy and bias in behavioral assessment.

    PubMed

    Lerman, Dorothea C; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and contingencies for scoring accurately would bias responding reliably. Experiment 2 focused on one variable (specificity of the operational definition) that we hypothesized might decrease the likelihood of bias. The effects of social consequences and information about expected behavior change were examined in Experiment 3. Results indicated that feedback and contingencies reliably biased responding and that the clarity of the definition only moderately affected this outcome.
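    For readers unfamiliar with the signal-detection indices used in this line of work, the sketch below computes sensitivity (d') and response bias (c) from an observer's hit and false-alarm counts, with a standard correction for extreme rates; the counts in the example are hypothetical and not taken from the study.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection indices for an observer scoring occurrences of a target
    behavior: sensitivity d' = z(H) - z(F), response bias c = -(z(H) + z(F)) / 2.
    A log-linear correction keeps hit and false-alarm rates away from 0 and 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -(zh + zf) / 2.0

# Hypothetical counts: 40 intervals containing aggression and 40 without
print(dprime_and_criterion(hits=32, misses=8, false_alarms=6, correct_rejections=34))
```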

  16. Highlights from the first ecancer-Liga Colombiana contra el Cancer conference, 17-18 November 2016, Bogota, Colombia.

    PubMed

    Castro, Carlos

    2017-01-01

    The first oncology conference organised by ecancer and the Liga Colombiana contra el Cancer took place on 17-18 November 2016 in Bogota. It was a highly successful event owing to the number of participants, the quality of the speakers, and the academic programme. Around 48 professors from 8 different countries came and shared their knowledge and experience of cancer management. They also talked about the most recent developments noted or achieved in this area. The keynote speech from Dr Nubia Muñoz, on the safety of the HPV vaccine and the implications of a mass vaccination programme in developing countries, was of great interest. Geriatric oncology and palliative care were also topics that sparked great interest during the event.

  17. The effect of stimulus strength on the speed and accuracy of a perceptual decision.

    PubMed

    Palmer, John; Huk, Alexander C; Shadlen, Michael N

    2005-05-02

    Both the speed and the accuracy of a perceptual judgment depend on the strength of the sensory stimulation. When stimulus strength is high, accuracy is high and response time is fast; when stimulus strength is low, accuracy is low and response time is slow. Although the psychometric function is well established as a tool for analyzing the relationship between accuracy and stimulus strength, the corresponding chronometric function for the relationship between response time and stimulus strength has not received as much consideration. In this article, we describe a theory of perceptual decision making based on a diffusion model. In it, a decision is based on the additive accumulation of sensory evidence over time to a bound. Combined with simple scaling assumptions, the proportional-rate and power-rate diffusion models predict simple analytic expressions for both the chronometric and psychometric functions. In a series of psychophysical experiments, we show that this theory accounts for response time and accuracy as a function of both stimulus strength and speed-accuracy instructions. In particular, the results demonstrate a close coupling between response time and accuracy. The theory is also shown to subsume the predictions of Piéron's Law, a power function dependence of response time on stimulus strength. The theory's analytic chronometric function allows one to extend theories of accuracy to response time.
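    A sketch of the analytic psychometric and chronometric functions for a symmetric-bound diffusion process whose drift rate is proportional to stimulus strength, as assumed by the proportional-rate model described above (unit diffusion coefficient assumed); the parameter values in the example are hypothetical, not fits from the paper.

```python
import numpy as np

def accuracy(x, k, A):
    """Psychometric function: probability correct for stimulus strength x,
    sensitivity k, and bound A, with unit diffusion coefficient."""
    return 1.0 / (1.0 + np.exp(-2.0 * A * k * x))

def mean_rt(x, k, A, t_residual):
    """Chronometric function: mean decision time (A/kx)*tanh(Akx) plus a
    residual non-decision time; the x -> 0 limit of the decision time is A**2."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        t_dec = np.where(x == 0, A**2, A / (k * x) * np.tanh(A * k * x))
    return t_dec + t_residual

# Hypothetical parameters evaluated across a range of motion coherences
coh = np.array([0.0, 0.032, 0.064, 0.128, 0.256, 0.512])
print(accuracy(coh, k=15.0, A=0.8))
print(mean_rt(coh, k=15.0, A=0.8, t_residual=0.35))
```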

  18. Accuracy of a Classical Test Theory-Based Procedure for Estimating the Reliability of a Multistage Test. Research Report. ETS RR-17-02

    ERIC Educational Resources Information Center

    Kim, Sooyeon; Livingston, Samuel A.

    2017-01-01

    The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
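    As a toy illustration of the quantity being estimated (not the ETS study's actual simulation design or item parameters), the sketch below generates responses to two parallel forms from a 2PL IRT model and takes the correlation of the number-correct scores as the alternate-forms reliability.

```python
import numpy as np

def simulate_alternate_forms_reliability(n_examinees=2000, n_items=30, seed=0):
    """Toy alternate-forms reliability: simulate two parallel 2PL forms taken by
    the same examinees and correlate the number-correct scores. All sample sizes
    and item parameters are arbitrary placeholders."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_examinees)                 # true abilities
    scores = []
    for _ in range(2):                                   # two parallel forms
        a = rng.uniform(0.8, 1.6, size=n_items)          # discriminations
        b = rng.normal(0.0, 1.0, size=n_items)           # difficulties
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
        scores.append((rng.random((n_examinees, n_items)) < p).sum(axis=1))
    return np.corrcoef(scores[0], scores[1])[0, 1]

print(simulate_alternate_forms_reliability())
```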

  19. Accuracy of theory for calculating electron impact ionization of molecules

    NASA Astrophysics Data System (ADS)

    Chaluvadi, Hari Hara Kumar

    The study of electron impact single ionization of atoms and molecules has provided valuable information about fundamental collisions. The most detailed information is obtained from triple differential cross sections (TDCS) in which the energy and momentum of all three final state particles are determined. These cross sections are much more difficult for theory since the detailed kinematics of the experiment become important. There are many theoretical approximations for ionization of molecules. One of the successful methods is the molecular 3-body distorted wave (M3DW) approximation. One of the strengths of the DW approximation is that it can be applied for any energy and any size molecule. One of the approximations that has been made to significantly reduce the required computer time is the OAMO (orientation averaged molecular orbital) approximation. In this dissertation, the accuracy of the M3DW-OAMO is tested for different molecules. Surprisingly, the M3DW-OAMO approximation yields reasonably good agreement with experiment for ionization of H2 and N2. On the other hand, the M3DW-OAMO results for ionization of CH4, NH3 and DNA derivative molecules did not agree very well with experiment. Consequently, we proposed the M3DW with a proper average (PA) calculation. In this dissertation, it is shown that the M3DW-PA calculations for CH4 and SF6 are in much better agreement with experimental data than the M3DW-OAMO results.

  20. Highlights from the first ecancer–Liga Colombiana contra el Cancer conference, 17–18 November 2016, Bogota, Colombia

    PubMed Central

    Castro, Carlos

    2017-01-01

    The first oncology conference organised by ecancer and the Liga Colombiana contra el Cancer took place on 17–18 November 2016 in Bogota. It was a highly successful event owing to the number of participants, the quality of the speakers, and the academic programme. Around 48 professors from 8 different countries shared their knowledge and experience of cancer management and discussed the most recent developments in this area. The keynote speech from Dr Nubia Muñoz, on the safety of the HPV vaccine and the implications of a mass vaccination programme in developing countries, was of particular interest. Geriatric oncology and palliative care also sparked great interest during the event. PMID:28487749

  1. Electrocautery versus Ultracision versus LigaSure in Surgical Management of Hyperhidrosis.

    PubMed

    Divisi, Duilio; Di Leonardo, Gabriella; De Vico, Andrea; Crisci, Roberto

    2015-12-01

    The aim of the study was to evaluate the sympathectomy procedures for primary hyperhidrosis in terms of complications and effectiveness. From January 2010 to September 2012 we performed 130 sympathectomies in 65 patients, 27 males (42%) and 38 females (58%). Electrocoagulation was used in 20 procedures (15%), ultrasonic scalpel in 54 (42%), and radiofrequency dissector in 56 (43%). Seven patients (11%) underwent bilateral sympathectomy in the same surgical session, while in 58 (89%) the right surgical approach was delayed 30 days from the first procedure. We noticed 12 complications (9%): (a) chest pain in 6 patients (4 with electrocoagulation, 1 with ultrasonic scalpel, and 1 with radiofrequency dissector), which disappeared in 20 ± 1 days; (b) paresthesias in 3 electrocoagulation patients, which resolved in 23 ± 5 days; (c) bradycardia in 1 ultrasonic-scalpel patient, which normalized by the 4th postoperative hour; and (d) unilateral relapse in 2 electrocoagulation patients after the second-side approach, successfully treated in 1 patient by repeat surgery in video-assisted thoracoscopy (VAT). The quality-adjusted life year and the quality of life evaluation revealed a statistically significant improvement (p = 0.02) in excessive sweating and general satisfaction after surgery, with Ultracision and LigaSure showing better findings than electrocoagulation. The latest-generation devices offered greater efficacy in the treatment of hyperhidrosis, minimizing complications and facilitating the resumption of normal work and social activity of patients.

  2. Test expectancy affects metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to point students to appropriate cues directly, using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). After reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning.

  3. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.

  4. Kappa and Rater Accuracy: Paradigms and Parameters.

    PubMed

    Conger, Anthony J

    2017-12-01

    Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa (κ). Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another (concordance), using both nonstochastic and stochastic category membership. Using a probability model to express category assignments in terms of rater accuracy and random error, it is shown that observed agreement (Po) depends only on rater accuracy and number of categories; however, expected agreement (Pe) and κ depend additionally on category frequencies. Moreover, category frequencies affect Pe and κ solely through the variance of the category proportions, regardless of the specific frequencies underlying the variance. Paradoxically, some judgment paradigms involving stochastic categories are shown to yield higher κ values than their nonstochastic counterparts. Using the stated probability model, assignments to categories were generated for 552 combinations of paradigms, rater and category parameters, category frequencies, and number of stimuli. Observed means and standard errors for Po, Pe, and κ were fully consistent with theory expectations. Guidelines for interpretation of rater accuracy and reliability are offered, along with a discussion of alternatives to the basic model.
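
    For reference, the agreement quantities discussed above (observed agreement Po, chance-expected agreement Pe and Cohen's kappa) can be computed from a two-rater confusion matrix as in the short sketch below. This is the generic textbook computation, not the article's simulation code, and the example counts are made up.

      import numpy as np

      def cohens_kappa(confusion):
          """Compute observed agreement Po, expected agreement Pe and Cohen's kappa
          from a k x k confusion matrix of category assignments by two raters."""
          m = np.asarray(confusion, dtype=float)
          n = m.sum()
          po = np.trace(m) / n                                     # observed agreement
          pe = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2      # agreement expected by chance
          kappa = (po - pe) / (1.0 - pe)
          return po, pe, kappa

      # Example: two raters assigning 100 stimuli to three categories.
      counts = [[40,  5,  5],
                [ 6, 20,  4],
                [ 4,  6, 10]]
      po, pe, kappa = cohens_kappa(counts)
      print(f"Po = {po:.3f}, Pe = {pe:.3f}, kappa = {kappa:.3f}")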

  5. Accuracy of Spencer-Attix cavity theory and calculations of fluence correction factors for the air kerma formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Russa, D. J.; Rogers, D. W. O.

    EGSnrc calculations of ion chamber response and Spencer-Attix (SA) restricted stopping-power ratios are used to test the assumptions of the SA cavity theory and to assess the accuracy of this theory as it applies to the air kerma formalism for 60Co beams. Consistent with previous reports, the EGSnrc calculations show that the SA cavity theory, as it is normally applied, requires a correction for the perturbation of the charged particle fluence (K_fl) by the presence of the cavity. The need for K_fl corrections arises from the fact that the standard prescription for choosing the low-energy threshold Δ in the SA restricted stopping-power ratio consistently underestimates the values of Δ needed if no perturbation to the fluence is assumed. The use of fluence corrections can be avoided by appropriately choosing Δ, but it is not clear how Δ can be calculated from first principles. Values of Δ required to avoid K_fl corrections were found to be consistently higher than Δ values obtained using the conventional approach and are also observed to be dependent on the composition of the wall in addition to the cavity size. Values of K_fl have been calculated for many of the graphite-walled ion chambers used by the national metrology institutes around the world and found to be within 0.04% of unity in all cases, with an uncertainty of about 0.02%.

  6. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

    In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of such models is given. Based on the theoretical and the calculated pre-crash velocities, an accuracy degree evaluation formula is obtained. In a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that the method is feasible in practice.

  7. [As the twig is bent, so is the tree inclined: children and the Liga Brasileira de Higiene Mental's eugenic programs].

    PubMed

    Reis, J R

    2000-01-01

    Created in the early 1920s, at a moment when the country's psychiatric field was embracing the preventive outlook, the Liga Brasileira de Higiene Mental counted among its members the elite of Brazilian psychiatry, along with a number of physicians and intellectuals. The article discusses the institution's proposals for intervention among children. The league ended up incorporating into its theoretical arsenal the basic themes of mental hygiene and eugenics as part of its general goal of collaborating in Brazil's process of "racial sanitation". With this objective in mind, and viewing the child as a "pre-citizen" who is a "fundamental part within the man of the future", league members included the children's issue in their projects and saw an imperative need for mental health care from early ages on.

  8. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and for expanding the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDEs. The method may be used as a tool to design new NO carrier molecules.
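
    The final regression step of such a pipeline, mapping a few selected descriptors to a correction of the DFT value, can be sketched generically as below. This is an illustrative radial-basis-function network fitted by linear least squares with randomly chosen centers and made-up data; the paper's actual descriptor selection (via SOFMNN clustering) and network training are not reproduced here.

      import numpy as np

      def fit_rbf_network(X, y, centers, width):
          """Fit output weights of a radial-basis-function network by least squares.
          X: (n_samples, n_descriptors); y: target values (e.g. reference BDEs minus
          DFT values); centers: (n_centers, n_descriptors); width: Gaussian width."""
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          Phi = np.exp(-d2 / (2.0 * width ** 2))             # Gaussian basis activations
          Phi = np.hstack([Phi, np.ones((len(X), 1))])       # add a bias column
          w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
          return w

      def predict_rbf(X, centers, width, w):
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          Phi = np.exp(-d2 / (2.0 * width ** 2))
          Phi = np.hstack([Phi, np.ones((len(X), 1))])
          return Phi @ w

      # Toy usage with random descriptors (purely illustrative numbers).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(92, 4))                           # 4 selected descriptors
      y = 0.5 * X[:, 0] - 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=92)
      centers = X[rng.choice(len(X), size=10, replace=False)]
      w = fit_rbf_network(X, y, centers, width=1.0)
      residual = y - predict_rbf(X, centers, width=1.0, w=w)
      print("MAD of RBFNN fit:", np.mean(np.abs(residual)))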

  9. Safety of LigaSure in recurrent laryngeal nerve dissection-porcine model using continuous monitoring.

    PubMed

    Dionigi, Gianlorenzo; Chiang, Feng-Yu; Kim, Hoon Yub; Randolph, Gregory W; Mangano, Alberto; Chang, Pi-Ying; Lu, I-Cheng; Lin, Yi-Chu; Chen, Hui-Chun; Wu, Che-Wei

    2017-07-01

    This study investigated recurrent laryngeal nerve (RLN) real-time electromyography (EMG) data to define optimal safety parameters of the LigaSure Small Jaw (LSJ) instrument during thyroidectomy. Prospective animal model. Dynamic EMG tracings were recorded from 32 RLNs (16 piglets) during various applications of the LSJ around the nerve, using continuous electrophysiologic monitoring. At varying distances from the RLN, the LSJ was activated (activation study). The LSJ was also applied to the RLN at timed intervals after activation and after a cooling maneuver through placement on the sternocleidomastoid muscle (cooling study). In the activation study, there was no adverse EMG event at 2 to 5 mm distance (16 RLNs, 96 tests). In the cooling study, there was no adverse EMG event after a 2-second cooling time (16 RLNs, 96 tests) or after the LSJ cooling maneuver on the surrounding muscle before reaching the RLNs (8 RLNs, 24 tests). Based on EMG functional assessment, the safe distance for LSJ activation was 2 mm. Further LSJ-RLN contact was safe if the LSJ was cooled for more than 2 seconds or cooled by the muscle-touch maneuver. The LSJ should be used with these distance and time parameters in mind to avoid RLN injury. Laryngoscope, 127:1724-1729, 2017.

  10. Kappa and Rater Accuracy: Paradigms and Parameters

    ERIC Educational Resources Information Center

    Conger, Anthony J.

    2017-01-01

    Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…

  11. A Promising Tool to Achieve Chemical Accuracy for Density Functional Theory Calculations on Y-NO Homolysis Bond Dissociation Energies

    PubMed Central

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules. PMID:22942689

  12. Quadratic canonical transformation theory and higher order density matrices.

    PubMed

    Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic

    2009-03-28

    Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H(2)O and N(2) binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.

  13. Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.

    PubMed

    Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland

    2009-06-09

    The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
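
    The control parameter in this scheme is the decomposition threshold: lowering it adds Cholesky vectors and tightens the approximation. The sketch below shows a generic incomplete pivoted Cholesky decomposition of a symmetric positive semidefinite matrix (a smooth kernel matrix stands in for the integral matrix); it illustrates the threshold-accuracy trade-off only and is not the integral-handling code used in the paper.

      import numpy as np

      def pivoted_cholesky(M, threshold=1e-6):
          """Incomplete pivoted Cholesky decomposition of a symmetric positive
          semidefinite matrix M.  Returns L with M ~= L @ L.T; the number of
          columns of L grows as the threshold is lowered."""
          M = np.array(M, dtype=float)
          n = M.shape[0]
          diag = M.diagonal().copy()
          L = np.zeros((n, 0))
          while diag.max() > threshold:
              p = int(np.argmax(diag))                       # pivot on largest residual diagonal
              col = (M[:, p] - L @ L[p, :]) / np.sqrt(diag[p])
              L = np.column_stack([L, col])
              diag -= col ** 2
              diag[diag < 0] = 0.0                           # guard against round-off
          return L

      # Example: a smooth positive semidefinite kernel matrix.
      x = np.linspace(0.0, 1.0, 50)
      M = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
      for tau in (1e-2, 1e-4, 1e-8):
          L = pivoted_cholesky(M, tau)
          err = np.max(np.abs(M - L @ L.T))
          print(f"threshold {tau:g}: {L.shape[1]} Cholesky vectors, max error {err:.2e}")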

  14. Is There Evidence for a Mixture of Processes in Speed-Accuracy Trade-Off Behavior?

    PubMed

    van Maanen, Leendert

    2016-01-01

    The speed-accuracy trade-off (SAT) effect refers to the behavioral trade-off between fast yet error-prone responses and accurate but slow responses. Multiple theories on the cognitive mechanisms behind SAT exist. One theory assumes that SAT is a consequence of strategically adjusting the amount of evidence required for overt behaviors, such as perceptual choices. Another theory hypothesizes that SAT is the consequence of the mixture of multiple categorically different cognitive processes. In this paper, these theories are disambiguated by assessing whether the fixed-point property of mixture distributions holds, in both simulations and data. I conclude that, at least for perceptual decision making, there is no evidence for a mixture of different cognitive processes to trade off accuracy of responding for speed.
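
    The fixed-point property used here states that every binary mixture of two fixed distributions has the same density at the point where the two component densities cross, whatever the mixing proportion. A small numerical illustration of the property itself, with hypothetical Gaussian components chosen only for simplicity:

      import numpy as np
      from scipy.stats import norm

      # Two hypothetical component RT distributions (seconds), modelled as Gaussians
      # purely for illustration: a fast "guessing" process and a slower
      # "stimulus-controlled" process.
      f_fast = lambda t: norm.pdf(t, loc=0.40, scale=0.10)
      f_slow = lambda t: norm.pdf(t, loc=0.70, scale=0.10)

      t = np.linspace(0.1, 1.2, 2201)
      diff = f_fast(t) - f_slow(t)
      i = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]
      # Linear interpolation to the point where the component densities are equal.
      cross = t[i] - diff[i] * (t[i + 1] - t[i]) / (diff[i + 1] - diff[i])

      # Every binary mixture p*f_fast + (1-p)*f_slow has the same density at that
      # point, regardless of the mixing proportion p -- the fixed-point property.
      for p in (0.2, 0.5, 0.8):
          mixture = p * f_fast(cross) + (1 - p) * f_slow(cross)
          print(f"p = {p:.1f}: mixture density at t = {cross:.3f} s is {mixture:.4f}")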

  15. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes quantitative fitting of the model to human data computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that reduce simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
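
    A common signal-detection expression for accuracy in an M-location target-localization task, assuming independent unit-variance Gaussian responses and a maximum-response decision rule, is P(correct) = ∫ φ(x − d′) Φ(x)^(M−1) dx. The sketch below evaluates this integral numerically; it is a generic SDT baseline of the kind such models are compared against, not the authors' extended Guided Search model.

      import numpy as np
      from scipy.stats import norm
      from scipy.integrate import quad

      def p_correct_localization(d_prime, n_locations):
          """Probability of correctly localizing a target among n_locations,
          assuming independent unit-variance Gaussian responses and a maximum rule:
          the target response has mean d', the distractors mean 0."""
          integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
          p, _ = quad(integrand, -np.inf, np.inf)
          return p

      for d in (0.5, 1.0, 2.0):
          print(f"d' = {d:3.1f}: " + ", ".join(
              f"M={m}: {p_correct_localization(d, m):.3f}" for m in (2, 4, 8)))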

  16. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  17. Medical accuracy in sexuality education: ideology and the scientific process.

    PubMed

    Santelli, John S

    2008-10-01

    Recently, many states have implemented requirements for scientific or medical accuracy in sexuality education and HIV prevention programs. Although seemingly uncontroversial, these requirements respond to the increasing injection of ideology into sexuality education, as represented by abstinence-only programs. I describe the process by which health professionals and government advisory groups within the United States reach scientific consensus and review the legal requirements and definitions for medical accuracy. Key elements of this scientific process include the weight of scientific evidence, the importance of scientific theory, peer review, and recognition by mainstream scientific and health organizations. I propose a concise definition of medical accuracy that may be useful to policymakers, health educators, and other health practitioners.

  18. Accuracy increase of self-compensator

    NASA Astrophysics Data System (ADS)

    Zhambalova, S. Ts; Vinogradova, A. A.

    2018-03-01

    In this paper, the authors consider a self-compensation system and a method for increasing its accuracy without violating the information-theoretic requirements on measuring devices. This is achieved by pulse control of the tracking system within the dead zone (the proportional section of the amplifier's characteristic). Pulse control increases the control power even though the input signal of the amplifier is infinitesimal; to this end, the authors use a conversion scheme for the input quantity. Reducing the dead band is also possible, but then the system becomes unstable; increasing the amount of information received from the instrument via correcting circuits complicates the system, and drastically reducing the feedback coefficient lowers the speed. In this way, the accuracy of the self-compensation system is increased without compromising the measurement condition. The implementation technique allows the power of the input signal to be increased by many orders of magnitude.

  19. d'plus: A program to calculate accuracy and bias measures from detection and discrimination data.

    PubMed

    Macmillan, N A; Creelman, C D

    1997-01-01

    The program d'plus calculates accuracy (sensitivity) and response-bias parameters using Signal Detection Theory, Choice Theory, and 'nonparametric' models. It is appropriate for data from one-interval, two- and three-interval forced-choice, same-different, ABX, and oddity experimental paradigms.
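
    For the simplest case such a program handles, the one-interval (yes/no) paradigm, the core computation is d′ = z(H) − z(F) and c = −0.5[z(H) + z(F)]. The sketch below re-implements that calculation with a conventional correction for extreme rates; it is illustrative and not the d'plus source code.

      from scipy.stats import norm

      def yes_no_sdt(hits, misses, false_alarms, correct_rejections):
          """Sensitivity d' and criterion c for a one-interval (yes/no) design,
          with a standard 1/(2N) correction for perfect hit or false-alarm rates."""
          n_signal = hits + misses
          n_noise = false_alarms + correct_rejections
          h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
          f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
          zh, zf = norm.ppf(h), norm.ppf(f)
          d_prime = zh - zf                 # accuracy (sensitivity)
          c = -0.5 * (zh + zf)              # response bias (criterion)
          return d_prime, c

      print(yes_no_sdt(hits=40, misses=10, false_alarms=15, correct_rejections=35))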

  20. Development of an Enzyme-Linked Immunosorbent Assay Using a Recombinant LigA Fragment Comprising Repeat Domains 4 to 7.5 as an Antigen for Diagnosis of Equine Leptospirosis

    PubMed Central

    Yan, Weiwei; Saleem, Muhammad Hassan; McDonough, Patrick; McDonough, Sean P.; Divers, Thomas J.

    2013-01-01

    Leptospira immunoglobulin (Ig)-like (Lig) proteins are a novel family of surface-associated proteins in which the N-terminal 630 amino acids are conserved. In this study, we truncated the LigA conserved region into 7 fragments comprising the 1st to 3rd (LigACon1-3), 4th to 7.5th (LigACon4-7.5), 4th (LigACon4), 4.5th to 5.5th (LigACon4.5–5.5), 5.5th to 6.5th (LigACon5.5–6.5), 4th to 5th (LigACon4-5), and 6th to 7.5th (LigACon6-7.5) repeat domains. All 7 recombinant Lig proteins were screened using a slot-shaped dot blot assay for the diagnosis of equine leptospirosis. Our results showed that LigACon4-7.5 is the best candidate diagnostic antigen in a slot-shaped dot blot assay. LigACon4-7.5 was further evaluated as an indirect enzyme-linked immunosorbent assay (ELISA) antigen for the detection of Leptospira antibodies in equine sera. This assay was evaluated with equine sera (n = 60) that were microscopic agglutination test (MAT) negative and sera (n = 220) that were MAT positive to the 5 serovars that most commonly cause equine leptospirosis. The indirect ELISA results showed that at a single serum dilution of 1:250, the sensitivity and specificity of ELISA were 80.0% and 87.2%, respectively, compared to those of MAT. In conclusion, an indirect ELISA was developed utilizing a recombinant LigA fragment comprising the 4th to 7.5th repeat domain (LigACon4-7.5) as a diagnostic antigen for equine leptospirosis. This ELISA was found to be sensitive and specific, and it yielded results that concurred with those of the standard MAT. PMID:23720368

  1. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  2. Design and fabrication of a 1-DOF drive mode and 2-DOF sense mode micro-gyroscope using SU-8 based UV-LIGA process

    NASA Astrophysics Data System (ADS)

    Verma, Payal; Juneja, Sucheta; Savelyev, Dmitry A.; Khonina, Svetlana N.; Gopal, Ram

    2016-04-01

    This paper presents design and fabrication of a 1-DOF (degree-of-freedom) drive mode and 2-DOF sense mode micro-gyroscope. It is an inherently robust structure and offers a high sense frequency bandwidth. The proposed design utilizes resonance of the 1-DOF drive mode oscillator and employs a dynamic amplification concept in the sense modes to increase the sensitivity while maintaining robustness. The 2-DOF sense mode renders the device immune to process imperfections and environmental effects. The design is simulated using FEA software (CoventorWare®). The device is designed considering process compatibility with the SU-8 based UV-LIGA process, which is an economical fabrication technique. The complete fabrication process is presented along with SEM images of the fabricated device. The device has 9 µm thick Nickel as the key structural layer with an overall reduced key structure size of 2.2 mm by 2.1 mm.

  3. Kinematical Test Theories for Special Relativity

    NASA Astrophysics Data System (ADS)

    Lämmerzahl, Claus; Braxmaier, Claus; Dittus, Hansjörg; Müller, Holger; Peters, Achim; Schiller, Stephan

    A comparison of certain kinematical test theories for Special Relativity, including the Robertson and Mansouri-Sexl test theories, is presented, and the accuracy of the experimental results testing Special Relativity is expressed in terms of the parameters appearing in these test theories. The theoretical results are applied to the most precise experimental results obtained recently for the isotropy of light propagation and the constancy of the speed of light.

  4. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    The accuracy of thematic map products is not spatially homogenous, but instead variable across most landscapes. Properly analyzing and representing the spatial distribution (pattern) of thematic map accuracy would provide valuable user information for assessing appropriate applic...

  5. Recollection is a continuous process: implications for dual-process theories of recognition memory.

    PubMed

    Mickes, Laura; Wais, Peter E; Wixted, John T

    2009-04-01

    Dual-process theory, which holds that recognition decisions can be based on recollection or familiarity, has long seemed incompatible with signal detection theory, which holds that recognition decisions are based on a singular, continuous memory-strength variable. Formal dual-process models typically regard familiarity as a continuous process (i.e., familiarity comes in degrees), but they construe recollection as a categorical process (i.e., recollection either occurs or does not occur). A continuous process is characterized by a graded relationship between confidence and accuracy, whereas a categorical process is characterized by a binary relationship such that high confidence is associated with high accuracy but all lower degrees of confidence are associated with chance accuracy. Using a source-memory procedure, we found that the relationship between confidence and source-recollection accuracy was graded. Because recollection, like familiarity, is a continuous process, dual-process theory is more compatible with signal detection theory than previously thought.

  6. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    This paper presents a fuzzy set-based method of mapping spatial accuracy of thematic map and computing several ecological indicators while taking into account spatial variation of accuracy associated with different land cover types and other factors (e.g., slope, soil type, etc.)...

  7. The influence of delaying judgments of learning on metacognitive accuracy: a meta-analytic review.

    PubMed

    Rhodes, Matthew G; Tauber, Sarah K

    2011-01-01

    Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the delayed JOL effect. The meta-analyses reported in the current study examined the predominant theoretical accounts as well as potential moderators of the delayed JOL effect. The first meta-analysis examined the relative accuracy of delayed compared with immediate JOLs across 4,554 participants (112 effect sizes) through gamma correlations between JOLs and memory accuracy. Those data showed that delaying JOLs leads to robust benefits to relative accuracy (g = 0.93). The second meta-analysis examined memory performance for delayed compared with immediate JOLs across 3,807 participants (98 effect sizes). Those data showed that delayed JOLs result in a modest but reliable benefit for memory performance relative to immediate JOLs (g = 0.08). Findings from these meta-analyses are well accommodated by theories suggesting that delayed JOL accuracy reflects access to more diagnostic information from long-term memory rather than being a by-product of a retrieval opportunity. However, these data also suggest that theories proposing that the delayed JOL effect results from a memorial benefit or the match between the cues available for JOLs and those available at test may also provide viable explanatory mechanisms necessary for a comprehensive account.
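
    Relative accuracy in this literature is usually quantified by the Goodman-Kruskal gamma correlation between item-by-item JOLs and later recall. A minimal sketch of that computation, with made-up judgments and outcomes:

      from itertools import combinations

      def goodman_kruskal_gamma(judgments, outcomes):
          """Gamma correlation between item-by-item judgments of learning and
          later recall outcomes (1 = recalled, 0 = not recalled)."""
          concordant = discordant = 0
          for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
              if j1 == j2 or o1 == o2:
                  continue                      # ties are ignored
              if (j1 - j2) * (o1 - o2) > 0:
                  concordant += 1
              else:
                  discordant += 1
          return (concordant - discordant) / (concordant + discordant)

      jols   = [80, 60, 90, 20, 40, 70, 10, 50]    # hypothetical JOLs (0-100)
      recall = [ 1,  1,  1,  0,  0,  1,  0,  0]    # hypothetical test outcomes
      print(f"gamma = {goodman_kruskal_gamma(jols, recall):+.2f}")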

  8. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
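
    The reweighting step can be illustrated schematically: given populations of the identified conformer clusters from the classical simulation and the quantum-classical energy difference of each cluster representative, thermodynamic perturbation theory corrects the populations with Boltzmann factors. The sketch below is a simplified single-structure-per-cluster version with hypothetical numbers, not the authors' implementation.

      import numpy as np

      KB_KCAL = 0.0019872041   # Boltzmann constant in kcal/(mol*K)

      def reweight_populations(mm_populations, delta_E_qm_mm, temperature=300.0):
          """Correct molecular-mechanics conformer populations with Boltzmann factors
          of the QM-MM energy difference (kcal/mol) of each cluster representative."""
          p = np.asarray(mm_populations, dtype=float)
          dE = np.asarray(delta_E_qm_mm, dtype=float)
          beta = 1.0 / (KB_KCAL * temperature)
          w = p * np.exp(-beta * (dE - dE.min()))     # shift for numerical stability
          return w / w.sum()

      # Hypothetical example: three conformer clusters from a classical MD run.
      p_mm = [0.55, 0.30, 0.15]          # populations from clustering the trajectory
      dE   = [0.0, -0.8, +0.4]           # E_QM - E_MM of each representative, kcal/mol
      print(reweight_populations(p_mm, dE))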

  9. Modeling Individual Differences in Response Time and Accuracy in Numeracy

    PubMed Central

    Ratcliff, Roger; Thompson, Clarissa A.; McKoon, Gail

    2015-01-01

    In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed/ accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy

  10. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  11. Diagnostic Accuracy of Recombinant Immunoglobulin-like Protein A-Based IgM ELISA for the Early Diagnosis of Leptospirosis in the Philippines.

    PubMed

    Kitashoji, Emi; Koizumi, Nobuo; Lacuesta, Talitha Lea V; Usuda, Daisuke; Ribo, Maricel R; Tria, Edith S; Go, Winston S; Kojiro, Maiko; Parry, Christopher M; Dimaano, Efren M; Villarama, Jose B; Ohnishi, Makoto; Suzuki, Motoi; Ariyoshi, Koya

    2015-01-01

    Leptospirosis is an important but largely under-recognized public health problem in the tropics. Establishment of highly sensitive and specific laboratory diagnosis is essential to reveal the magnitude of the problem and to improve treatment. This study aimed to evaluate the diagnostic accuracy of a recombinant LigA protein-based IgM ELISA during outbreaks in the clinical setting of a highly endemic country. A prospective study was conducted from October 2011 to September 2013 at a national referral hospital for infectious diseases in Manila, Philippines. Patients who were hospitalized with clinically suspected leptospirosis were enrolled. Plasma and urine were collected on admission and/or at discharge and tested using the LigA-IgM ELISA and a whole cell-based IgM ELISA. Sensitivity and specificity of these tests were evaluated with cases diagnosed by microscopic agglutination test (MAT), culture and LAMP as the composite reference standard and blood bank donors as healthy controls: the mean+3 standard deviation optical density value of healthy controls was used as the cut-off limit (0.062 for the LigA-IgM ELISA and 0.691 for the whole cell-based IgM ELISA). Of 304 patients enrolled in the study, 270 (89.1%) were male and the median age was 30.5 years; 167 (54.9%) were laboratory confirmed. The sensitivity and ROC curve AUC for the LigA-IgM ELISA was significantly greater than the whole cell-based IgM ELISA (69.5% vs. 54.3%, p<0.01; 0.90 vs. 0.82, p<0.01) on admission, but not at discharge. The specificity of LigA-IgM ELISA and whole cell-based IgM ELISA were not significantly different (98% vs. 97%). Among 158 MAT negative patients, 53 and 28 were positive by LigA- and whole cell-based IgM ELISA, respectively; if the laboratory confirmation was re-defined by LigA-IgM ELISA and LAMP, the clinical findings were more characteristic of leptospirosis than the diagnosis based on MAT/culture/LAMP. The newly developed LigA-IgM ELISA is more sensitive than the whole cell
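
    The cut-off and accuracy figures reported above follow a standard recipe: the cut-off is the mean plus three standard deviations of the healthy-control optical densities, and sensitivity and specificity are then computed against the composite reference standard. A sketch with purely hypothetical optical-density values:

      import numpy as np

      def evaluate_elisa(od_controls, od_cases_positive, od_cases_negative):
          """Cut-off = mean + 3*SD of healthy-control optical densities; sensitivity is
          computed on reference-standard-positive cases, specificity on negatives."""
          controls = np.asarray(od_controls, dtype=float)
          cutoff = controls.mean() + 3.0 * controls.std(ddof=1)
          sensitivity = np.mean(np.asarray(od_cases_positive) >= cutoff)
          specificity = np.mean(np.asarray(od_cases_negative) < cutoff)
          return cutoff, sensitivity, specificity

      # Hypothetical optical densities, for illustration only.
      rng = np.random.default_rng(42)
      healthy   = rng.normal(0.020, 0.014, 100).clip(min=0)   # blood-bank donors
      positives = rng.normal(0.180, 0.120, 167).clip(min=0)   # composite-reference positive
      negatives = rng.normal(0.030, 0.030, 137).clip(min=0)   # composite-reference negative
      cutoff, sens, spec = evaluate_elisa(healthy, positives, negatives)
      print(f"cut-off OD = {cutoff:.3f}, sensitivity = {sens:.1%}, specificity = {spec:.1%}")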

  12. Diagnostic Accuracy of Recombinant Immunoglobulin-like Protein A-Based IgM ELISA for the Early Diagnosis of Leptospirosis in the Philippines

    PubMed Central

    Kitashoji, Emi; Koizumi, Nobuo; Lacuesta, Talitha Lea V.; Usuda, Daisuke; Ribo, Maricel R.; Tria, Edith S.; Go, Winston S.; Kojiro, Maiko; Parry, Christopher M.; Dimaano, Efren M.; Villarama, Jose B.; Ohnishi, Makoto; Suzuki, Motoi; Ariyoshi, Koya

    2015-01-01

    Background Leptospirosis is an important but largely under-recognized public health problem in the tropics. Establishment of highly sensitive and specific laboratory diagnosis is essential to reveal the magnitude of problem and to improve treatment. This study aimed to evaluate the diagnostic accuracy of a recombinant LigA protein based IgM ELISA during outbreaks in the clinical-setting of a highly endemic country. Methodology/Principal Findings A prospective study was conducted from October 2011 to September 2013 at a national referral hospital for infectious diseases in Manila, Philippines. Patients who were hospitalized with clinically suspected leptospirosis were enrolled. Plasma and urine were collected on admission and/or at discharge and tested using the LigA-IgM ELISA and a whole cell-based IgM ELISA. Sensitivity and specificity of these tests were evaluated with cases diagnosed by microscopic agglutination test (MAT), culture and LAMP as the composite reference standard and blood bank donors as healthy controls: the mean+3 standard deviation optical density value of healthy controls was used as the cut-off limit (0.062 for the LigA-IgM ELISA and 0.691 for the whole cell-based IgM ELISA). Of 304 patients enrolled in the study, 270 (89.1%) were male and the median age was 30.5 years; 167 (54.9%) were laboratory confirmed. The sensitivity and ROC curve AUC for the LigA-IgM ELISA was significantly greater than the whole cell-based IgM ELISA (69.5% vs. 54.3%, p<0.01; 0.90 vs. 0.82, p<0.01) on admission, but not at discharge. The specificity of LigA-IgM ELISA and whole cell-based IgM ELISA were not significantly different (98% vs. 97%). Among 158 MAT negative patients, 53 and 28 were positive by LigA- and whole cell-based IgM ELISA, respectively; if the laboratory confirmation was re-defined by LigA-IgM ELISA and LAMP, the clinical findings were more characteristic of leptospirosis than the diagnosis based on MAT/culture/LAMP. Conclusions/Significance The newly

  13. An Integrated Microfabricated Chip with Double Functions as an Ion Source and Air Pump Based on LIGA Technology.

    PubMed

    Li, Hua; Jiang, Linxiu; Guo, Chaoqun; Zhu, Jianmin; Jiang, Yongrong; Chen, Zhencheng

    2017-01-04

    The injection and ionization of volatile organic compounds (VOCs) by an integrated chip is experimentally analyzed in this paper. The integrated chip consists of a needle-to-cylinder electrode mounted on a Polymethyl Methacrylate (PMMA) substrate. The needle-to-cylinder electrode is designed and fabricated by Lithographie, Galvanoformung and Abformung (LIGA) technology. In this paper, the needle is connected to a negative power supply of -5 kV and used as the cathode; the cylinder electrodes are composed of two arrays of cylinders and serve as the anode. The ionic wind is produced based on corona and glow discharges of the needle-to-cylinder electrodes. The experimental setup is designed to observe the properties of the needle-to-cylinder discharge and to demonstrate its functions as an ion source and air pump. In summary, the main results are as follows: (1) the ionic wind velocity produced by the chip is about 0.79 m/s at an applied voltage of -3300 V; (2) acetic acid and ammonia water can be injected through the chip, as verified by pH test paper; and (3) the current measured by a Faraday cup is about 10 pA for acetic acid and ammonia with an applied voltage of -3185 V. The integrated chip is promising for portable analytical instruments, such as ion mobility spectrometry (IMS), field asymmetric ion mobility spectrometry (FAIMS), and mass spectrometry (MS).

  14. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  15. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  16. Recent advances in analytical satellite theory

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1978-01-01

    Recent work on analytical satellite perturbation theory has involved the completion of a revision to 4th order for zonal harmonics, the addition of a treatment for ocean tides, an extension of the treatment for the noninertial reference system, and the completion of a theory for direct solar-radiation pressure and earth-albedo pressure. Combined with a theory for tesseral-harmonics, lunisolar, and body-tide perturbations, these formulations provide a comprehensive orbit-computation program. Detailed comparisons with numerical integration and observations are presented to assess the accuracy of each theoretical development.

  17. X-ray bursts: Observation versus theory

    NASA Technical Reports Server (NTRS)

    Lewin, W. H. G.

    1981-01-01

    Results of various observations of common type I X-ray bursts are discussed with respect to the theory of thermonuclear flashes in the surface layers of accreting neutron stars. Topics covered include burst profiles; irregular burst intervals; rise and decay times and the role of hydrogen; the accuracy of source distances; accuracy in radii determination; radius increase early in the burst; the super Eddington limit; temperatures at burst maximum; and the role of the magnetic field.

  18. Spatial accuracy assessment in natural resources and environmental sciences: Second International Symposium

    Treesearch

    H. Todd Mowrer; Raymond L. Czaplewski; R. H. Hamre

    1996-01-01

    This international symposium on theory and techniques for assessing the accuracy of spatial data and spatial analyses included more than ninety presentations by representatives from government, academic, and private institutions in over twenty countries throughout the world. To encourage interactions across disciplines, presentations in the general subject areas of...

  19. A new numerical theory of Earth rotation

    NASA Astrophysics Data System (ADS)

    Gerlach, Enrico; Klioner, Sergei; Soffel, Michael

    2012-08-01

    Nowadays the rotation of the Earth can be observed with an accuracy of about 0.01 milliarcseconds (mas), while theoretical models are able to describe this motion at a level of 1 mas. This mismatch is partly due to the enormous complexity of the involved processes, operating on different time scales and driven by a large variety of physical effects, but also partly due to the models used, which often rely on simplified and linearized equations to obtain the solution analytically. In this work we present our new numerical theory of the rotation of the Earth. The model underlying the theory is fully compatible with the post-Newtonian approximation of general relativity and is formulated using ordinary differential equations for the angles describing the orientation of the Earth (or its particular layers) in the GCRS. These equations are then solved numerically to describe the rotational motion with highest accuracy. Initially developed for a rigid Earth, our theory was extended towards a more realistic Earth model. In particular, we included 3 different layers (crust, fluid outer core and solid inner core) and all important coupling torques between them, as well as all important effects of non-rigidity, such as elastic deformation and relative angular momenta due to the atmosphere and ocean. In our presentation we will describe the details of our work and compare it to the currently used models of Earth rotation. Further, we discuss possible applications of our numerical theory to obtain high-accuracy models of the rotational motion of other celestial bodies such as Mercury.

  20. Exploration of the Components of Children's Reading Comprehension Using Rauding Theory.

    ERIC Educational Resources Information Center

    Rupley, William H.; And Others

    A study explored an application of rauding theory to the developmental components that contribute to elementary-age children's reading comprehension. The relationships among cognitive power, auditory accuracy level, pronunciation (word recognition) level, rauding (comprehension) accuracy level, rauding rate (reading rate) level, and rauding…

  1. Solvatochromic shifts from coupled-cluster theory embedded in density functional theory

    NASA Astrophysics Data System (ADS)

    Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas

    2013-09-01

    Building on the framework recently reported for determining general response properties for frozen-density embedding [S. Höfener, A. S. P. Gomes, and L. Visscher, J. Chem. Phys. 136, 044104 (2012)], 10.1063/1.3675845, in this work we report a first implementation of an embedded coupled-cluster in density-functional theory (CC-in-DFT) scheme for electronic excitations, where only the response of the active subsystem is taken into account. The formalism is applied to the calculation of coupled-cluster excitation energies of water and uracil in aqueous solution. We find that the CC-in-DFT results are in good agreement with reference calculations and experimental results. The accuracy of calculations is mainly sensitive to factors influencing the correlation treatment (basis set quality, truncation of the cluster operator) and to the embedding treatment of the ground-state (choice of density functionals). This allows for efficient approximations at the excited state calculation step without compromising the accuracy. This approximate scheme makes it possible to use a first principles approach to investigate environment effects with specific interactions at coupled-cluster level of theory at a cost comparable to that of calculations of the individual subsystems in vacuum.

  2. Vitreous carbon mask substrate for X-ray lithography

    DOEpatents

    Aigeldinger, Georg [Livermore, CA; Skala, Dawn M [Fremont, CA; Griffiths, Stewart K [Livermore, CA; Talin, Albert Alec [Livermore, CA; Losey, Matthew W [Livermore, CA; Yang, Chu-Yeu Peter [Dublin, CA

    2009-10-27

    The present invention is directed to the use of vitreous carbon as a substrate material for providing masks for X-ray lithography. The new substrate also enables a small thickness of the mask absorber used to pattern the resist, and this enables improved mask accuracy. An alternative embodiment comprises the use of vitreous carbon as a LIGA substrate, wherein the VC wafer blank is etched in a reactive ion plasma after which an X-ray resist is bonded. This surface treatment enables good adhesion of the X-ray photoresist and subsequent nucleation and adhesion of the electrodeposited metal for LIGA mold-making, while the VC substrate practically eliminates secondary radiation effects that lead to delamination of the X-ray resist from the substrate, the loss of isolated resist features, and the formation of a resist layer adjacent to the substrate that is insoluble in the developer.

  3. An Investigation of the Accuracy of Alternative Methods of True Score Estimation in High-Stakes Mixed-Format Examinations.

    ERIC Educational Resources Information Center

    Klinger, Don A.; Rogers, W. Todd

    2003-01-01

    The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…

  4. Kinetic mechanism and fidelity of nick sealing by Escherichia coli NAD+-dependent DNA ligase (LigA)

    PubMed Central

    Chauleau, Mathieu; Shuman, Stewart

    2016-01-01

    Escherichia coli DNA ligase (EcoLigA) repairs 3′-OH/5′-PO4 nicks in duplex DNA via reaction of LigA with NAD+ to form a covalent LigA-(lysyl-Nζ)–AMP intermediate (step 1); transfer of AMP to the nick 5′-PO4 to form an AppDNA intermediate (step 2); and attack of the nick 3′-OH on AppDNA to form a 3′-5′ phosphodiester (step 3). A distinctive feature of EcoLigA is its stimulation by ammonium ion. Here we used rapid mix-quench methods to analyze the kinetic mechanism of single-turnover nick sealing by EcoLigA–AMP. For substrates with correctly base-paired 3′-OH/5′-PO4 nicks, kstep2 was fast (6.8–27 s−1) and similar to kstep3 (8.3–42 s−1). Absent ammonium, kstep2 and kstep3 were 48-fold and 16-fold slower, respectively. EcoLigA was exquisitely sensitive to 3′-OH base mispairs and 3′ N:abasic lesions, which elicited 1000- to >20000-fold decrements in kstep2. The exception was the non-canonical 3′ A:oxoG configuration, which EcoLigA accepted as correctly paired for rapid sealing. These results underscore: (i) how EcoLigA requires proper positioning of the nick 3′ nucleoside for catalysis of 5′ adenylylation; and (ii) EcoLigA's potential to embed mutations during the repair of oxidative damage. EcoLigA was relatively tolerant of 5′-phosphate base mispairs and 5′ N:abasic lesions. PMID:26857547
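
    Rate constants of this kind are typically obtained by fitting single-turnover time courses to a single exponential, P(t) = A(1 − e^(−k_obs·t)). The sketch below fits hypothetical rapid mix-quench data this way; it is a generic illustration of the analysis, not the authors' code or data.

      import numpy as np
      from scipy.optimize import curve_fit

      def single_exponential(t, amplitude, k_obs):
          """Fraction of nicked substrate sealed at time t under single-turnover
          conditions: P(t) = amplitude * (1 - exp(-k_obs * t))."""
          return amplitude * (1.0 - np.exp(-k_obs * t))

      # Hypothetical rapid mix-quench time course (seconds, fraction ligated).
      t = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])
      frac = np.array([0.08, 0.16, 0.28, 0.52, 0.74, 0.88, 0.94, 0.95])
      popt, pcov = curve_fit(single_exponential, t, frac, p0=(1.0, 5.0))
      amp, k_obs = popt
      print(f"amplitude = {amp:.2f}, k_obs = {k_obs:.1f} s^-1")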

  5. Peer-Mediated vs. Individual Writing: Measuring Fluency, Complexity, and Accuracy in Writing

    ERIC Educational Resources Information Center

    Soleimani, Maryam; Modirkhamene, Sima; Sadeghi, Karim

    2017-01-01

    Drawing upon Vygotsky's Sociocultural Theory (SCT), this study aimed at investigating the effect of two writing modes, namely, peer-mediated/collaborative vs. individual writing on measures of fluency, accuracy, and complexity of female EFL learners' writing. Based on an in-house placement test and the First Certificate in English writing paper, a…

  6. A new theory for X-ray diffraction

    PubMed Central

    Fewster, Paul F.

    2014-01-01

    This article proposes a new theory of X-ray scattering that has particular relevance to powder diffraction. The underlying concept of this theory is that the scattering from a crystal or crystallite is distributed throughout space: this leads to the effect that enhanced scatter can be observed at the ‘Bragg position’ even if the ‘Bragg condition’ is not satisfied. The scatter from a single crystal or crystallite, in any fixed orientation, has the fascinating property of contributing simultaneously to many ‘Bragg positions’. It also explains why diffraction peaks are obtained from samples with very few crystallites, which cannot be explained with the conventional theory. The intensity ratios for an Si powder sample are predicted with greater accuracy and the temperature factors are more realistic. Another consequence is that this new theory predicts a reliability in the intensity measurements which agrees much more closely with experimental observations compared to conventional theory that is based on ‘Bragg-type’ scatter. The role of dynamical effects (extinction etc.) is discussed and how they are suppressed with diffuse scattering. An alternative explanation for the Lorentz factor is presented that is more general and based on the capture volume in diffraction space. This theory, when applied to the scattering from powders, will evaluate the full scattering profile, including peak widths and the ‘background’. The theory should provide an increased understanding of the reliability of powder diffraction measurements, and may also have wider implications for the analysis of powder diffraction data, by increasing the accuracy of intensities predicted from structural models. PMID:24815975

  7. A new theory for X-ray diffraction.

    PubMed

    Fewster, Paul F

    2014-05-01

    This article proposes a new theory of X-ray scattering that has particular relevance to powder diffraction. The underlying concept of this theory is that the scattering from a crystal or crystallite is distributed throughout space: this leads to the effect that enhanced scatter can be observed at the `Bragg position' even if the `Bragg condition' is not satisfied. The scatter from a single crystal or crystallite, in any fixed orientation, has the fascinating property of contributing simultaneously to many `Bragg positions'. It also explains why diffraction peaks are obtained from samples with very few crystallites, which cannot be explained with the conventional theory. The intensity ratios for an Si powder sample are predicted with greater accuracy and the temperature factors are more realistic. Another consequence is that this new theory predicts a reliability in the intensity measurements which agrees much more closely with experimental observations compared to conventional theory that is based on `Bragg-type' scatter. The role of dynamical effects (extinction etc.) is discussed and how they are suppressed with diffuse scattering. An alternative explanation for the Lorentz factor is presented that is more general and based on the capture volume in diffraction space. This theory, when applied to the scattering from powders, will evaluate the full scattering profile, including peak widths and the `background'. The theory should provide an increased understanding of the reliability of powder diffraction measurements, and may also have wider implications for the analysis of powder diffraction data, by increasing the accuracy of intensities predicted from structural models.

  8. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have studied the accuracy of parallel mechanisms, but further effort is needed to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector. The results show that orientation error sources have a greater effect on end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the allocation of tolerances to each component is finally determined. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for the component manufacturing and assembly of this kind of parallel mechanism.
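
    For illustration, the tolerance-allocation step can be posed as the constrained cost minimization described above. The sketch below is a hedged toy version: the sensitivity weights, the reciprocal cost model and the accuracy budget are invented placeholders, and SciPy's SLSQP solver stands in for the paper's genetic algorithm.

```python
# Hedged sketch: allocate geometric tolerances t_i to minimize manufacturing
# cost subject to an end-effector accuracy budget. Sensitivities s, the
# reciprocal cost model and the budget are illustrative assumptions, not
# values from the paper; SLSQP is used here instead of a genetic algorithm.
import numpy as np
from scipy.optimize import minimize

s = np.array([0.8, 0.5, 1.2, 0.3, 0.9])      # hypothetical error sensitivities
c = np.array([1.0, 0.6, 1.5, 0.4, 1.1])      # hypothetical cost coefficients
budget = 0.05                                # allowed end-effector error (mm)

cost = lambda t: np.sum(c / t)               # cost grows as tolerances tighten
accuracy_margin = lambda t: budget - np.sqrt(np.sum((s * t) ** 2))  # must stay >= 0

res = minimize(cost,
               x0=np.full(5, 0.01),
               method="SLSQP",
               bounds=[(1e-4, 0.1)] * 5,
               constraints=[{"type": "ineq", "fun": accuracy_margin}])
print("allocated tolerances (mm):", np.round(res.x, 4))
```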

  9. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in dimensional measurement and surface metrology. However, traditional vision measurement methods have limits such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy and greater flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the digital micromirror device, a CCD camera and a lens. First, accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels is achieved using moiré fringes and image processing based on sampling and interpolation. Then three coordinate systems are defined and the mathematical relationship between the digital micromirror coordinates and the CCD pixels is calculated using a checkerboard pattern. A verification experiment shows that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the modulation requirements. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which demonstrates the effectiveness of the optical modulation system.
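
    The mapping step between micromirror coordinates and CCD pixels can be illustrated with a least-squares fit to matched checkerboard points. The sketch below assumes a simple affine model and invented correspondences; it is not the moiré-fringe procedure of the paper.

```python
# Hedged sketch of the coordinate-mapping step: given matched checkerboard
# points in DMD mirror coordinates and CCD pixel coordinates (hypothetical
# values below), fit an affine transform by least squares and report the
# residual, which plays the role of the sub-pixel correspondence error.
import numpy as np

dmd = np.array([[10, 10], [10, 500], [500, 10], [500, 500], [250, 250]], float)
ccd = np.array([[102.1, 98.7], [101.6, 611.9], [615.3, 99.2],
                [614.8, 612.4], [358.2, 355.6]], float)

A = np.hstack([dmd, np.ones((len(dmd), 1))])      # [x, y, 1] design matrix
coeffs, *_ = np.linalg.lstsq(A, ccd, rcond=None)  # 3x2 affine parameters

pred = A @ coeffs
residual = np.linalg.norm(pred - ccd, axis=1)
print("max correspondence error (pixels):", residual.max())
```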

  10. An Integrated Microfabricated Chip with Double Functions as an Ion Source and Air Pump Based on LIGA Technology

    PubMed Central

    Li, Hua; Jiang, Linxiu; Guo, Chaoqun; Zhu, Jianmin; Jiang, Yongrong; Chen, Zhencheng

    2017-01-01

    The injection and ionization of volatile organic compounds (VOCs) by an integrated chip is experimentally analyzed in this paper. The integrated chip consists of a needle-to-cylinder electrode mounted on a Polymethyl Methacrylate (PMMA) substrate. The needle-to-cylinder electrode is designed and fabricated by Lithographie, Galvanoformung and Abformung (LIGA) technology. In this paper, the needle is connected to a −5 kV negative power supply and used as the cathode; the cylinder electrodes are composed of two arrays of cylinders and serve as the anode. The ionic wind is produced by corona and glow discharges of the needle-to-cylinder electrodes. The experimental setup is designed to observe the properties of the needle-to-cylinder discharge and demonstrate its functions as an ion source and air pump. In summary, the main results are as follows: (1) the ionic wind velocity produced by the chip is about 0.79 m/s at an applied voltage of −3300 V; (2) acetic acid and ammonia water can be injected through the chip, as confirmed with pH test paper; and (3) the current measured by a Faraday cup is about 10 pA for acetic acid and ammonia at an applied voltage of −3185 V. The integrated chip is promising for portable analytical instruments, such as ion mobility spectrometry (IMS), field asymmetric ion mobility spectrometry (FAIMS), and mass spectrometry (MS). PMID:28054980

  11. A noncontact laser technique for circular contouring accuracy measurement

    NASA Astrophysics Data System (ADS)

    Wang, Charles; Griffin, Bob

    2001-02-01

    The worldwide competition in manufacturing frequently requires the high-speed machine tools to deliver contouring accuracy in the order of a few micrometers, while moving at relatively high feed rates. Traditional test equipment is rather limited in its capability to measure contours of small radius at high speed. Described here is a new noncontact laser measurement technique for the test of circular contouring accuracy. This technique is based on a single-aperture laser Doppler displacement meter with a flat mirror as the target. It is of a noncontact type with the ability to vary the circular path radius continuously at data rates of up to 1000 Hz. Using this instrument, the actual radius, feed rate, velocity, and acceleration profiles can also be determined. The basic theory of operation, the hardware setup, the data collection, the data processing, and the error budget are discussed.

  12. Error and Uncertainty in the Accuracy Assessment of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Sarmento, Pedro Alexandre Reis

    Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database, which is intended to represent the "real" land cover, and reporting the comparison through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality and contain errors due to human uncertainty in assigning the land cover class that best characterizes a given area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that integrates the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impact that this uncertainty may have on the thematic accuracy measures reported to end users. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we study the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and its impact on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that express the uncertainty in the elaboration of reference databases are used to compute fuzzy confusion matrices. The proposed methodology is illustrated with a case study assessing the accuracy of a land cover map of Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery. The results demonstrate that including human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach to accuracy assessment.
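
    A much-simplified way to see how reference-label uncertainty widens accuracy measures is to treat each confusion-matrix cell as an interval and bound the overall accuracy, as in the hedged sketch below (hypothetical counts; the dissertation's fuzzy arithmetic is richer than this interval bound).

```python
# Hedged sketch: propagate reference-label uncertainty through overall
# accuracy by treating every confusion-matrix cell as an interval
# [lower, upper] (hypothetical counts). This simplified interval bound
# stands in for the dissertation's fuzzy-arithmetic treatment.
import numpy as np

lo = np.array([[48, 1, 0], [2, 37, 3], [0, 4, 25]], float)   # pessimistic counts
hi = np.array([[52, 3, 1], [4, 41, 5], [1, 6, 29]], float)   # optimistic counts

diag_lo, diag_hi = np.trace(lo), np.trace(hi)
off_lo = lo.sum() - diag_lo
off_hi = hi.sum() - diag_hi

oa_min = diag_lo / (diag_lo + off_hi)   # fewest agreements, most disagreements
oa_max = diag_hi / (diag_hi + off_lo)   # most agreements, fewest disagreements
print(f"overall accuracy in [{oa_min:.3f}, {oa_max:.3f}]")
```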

  13. Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.

    2015-01-01

    There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…
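
    One of the two lines of research mentioned above (the Rudner-style approach) can be sketched with a normal approximation to each examinee's ability estimate. The example below uses simulated abilities and standard errors and is not the cacIRT package interface.

```python
# Hedged sketch of a Rudner-style classification accuracy (CA) estimate:
# for each examinee, treat the ability estimate as normally distributed
# around the true theta with its standard error, and accumulate the
# probability that the examinee lands on the correct side of the cut score.
# The theta/SE values are simulated; this is not the cacIRT interface.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=1000)      # simulated true abilities
se = np.full_like(theta, 0.35)               # simulated standard errors
cut = 0.5                                    # classification cut score

p_above = 1.0 - norm.cdf((cut - theta) / se) # P(estimate falls above the cut)
p_correct = np.where(theta >= cut, p_above, 1.0 - p_above)
print("expected classification accuracy:", p_correct.mean().round(3))
```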

  14. The control of manual entry accuracy in management/engineering information systems, phase 1

    NASA Technical Reports Server (NTRS)

    Hays, Daniel; Nocke, Henry; Wilson, Harold; Woo, John, Jr.; Woo, June

    1987-01-01

    It was shown that clerical personnel can be tested for proofreading performance under simulated industrial conditions. A statistical study showed that proofreading errors follow an extreme value probability distribution. The study also showed that innovative man/machine interfaces can be developed to improve and control accuracy during data entry.

  15. Are people excessive or judicious in their egocentrism? A modeling approach to understanding bias and accuracy in people's optimism.

    PubMed

    Windschitl, Paul D; Rose, Jason P; Stalkfleet, Michael T; Smith, Andrew R

    2008-08-01

    People are often egocentric when judging their likelihood of success in competitions, leading to overoptimism about winning when circumstances are generally easy and to overpessimism when the circumstances are difficult. Yet, egocentrism might be grounded in a rational tendency to favor highly reliable information (about the self) more so than less reliable information (about others). A general theory of probability called extended support theory was used to conceptualize and assess the role of egocentrism and its consequences for the accuracy of people's optimism in 3 competitions (Studies 1-3, respectively). Also, instructions were manipulated to test whether people who were urged to avoid egocentrism would show improved or worsened accuracy in their likelihood judgments. Egocentrism was found to have a potentially helpful effect on one form of accuracy, but people generally showed too much egocentrism. Debias instructions improved one form of accuracy but had no impact on another. The advantages of using the EST framework for studying optimism and other types of judgments (e.g., comparative ability judgments) are discussed. (c) 2008 APA, all rights reserved

  16. Determination of the QCD Λ Parameter and the Accuracy of Perturbation Theory at High Energies.

    PubMed

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2016-10-28

    We discuss the determination of the strong coupling in the MS-bar scheme at the Z mass, α_MS-bar(m_Z), or, equivalently, the QCD Λ parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme s and at some energy scale μ. The higher the scale μ, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ parameter in three-flavor QCD, we perform lattice computations in a scheme that allows us to nonperturbatively reach very high energies, corresponding to α_s = 0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a 3% error in the Λ parameter, while data around α_s ≈ 0.2 are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
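
    For orientation only, the one-loop relation between the coupling at a scale μ and the Λ parameter is reproduced below; the work described above relies on higher-order perturbation theory matched to a nonperturbative lattice scheme, so this expression is merely a leading-order sketch.

```latex
% One-loop running coupling and the corresponding Lambda parameter
% (leading-order sketch only; beta_0 = 11 - 2 n_f / 3 for n_f flavors).
\alpha_s(\mu) \;=\; \frac{4\pi}{\beta_0 \,\ln\!\left(\mu^2/\Lambda^2\right)}
\qquad\Longleftrightarrow\qquad
\Lambda \;=\; \mu \,\exp\!\left(-\frac{2\pi}{\beta_0\,\alpha_s(\mu)}\right)
```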

  17. "Utilizing" signal detection theory.

    PubMed

    Lynn, Spencer K; Barrett, Lisa Feldman

    2014-09-01

    What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.
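
    The utility-based reading of signal detection theory can be made concrete with the equal-variance Gaussian model, where payoffs and base rates fix an optimal likelihood-ratio criterion. The sketch below uses invented utilities and a hypothetical base rate.

```python
# Hedged sketch of "utilized" signal detection theory: in the equal-variance
# Gaussian model the utility-maximizing likelihood-ratio criterion is
#   beta* = (P(noise)/P(signal)) * (U_cr - U_fa) / (U_hit - U_miss),
# which maps to the evidence-axis criterion c* = ln(beta*)/d' + d'/2.
# The payoffs and base rate below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

d_prime = 1.5          # sensitivity
p_signal = 0.3         # base rate of the signal event (e.g., "anger")
u_hit, u_miss, u_cr, u_fa = 1.0, -2.0, 0.5, -1.0   # hypothetical utilities

beta_star = ((1 - p_signal) / p_signal) * (u_cr - u_fa) / (u_hit - u_miss)
c_star = np.log(beta_star) / d_prime + d_prime / 2

hit = 1 - norm.cdf(c_star - d_prime)   # P(respond "signal" | signal)
fa = 1 - norm.cdf(c_star)              # P(respond "signal" | noise)
eu = (p_signal * (hit * u_hit + (1 - hit) * u_miss)
      + (1 - p_signal) * (fa * u_fa + (1 - fa) * u_cr))
print(f"optimal criterion c* = {c_star:.3f}, expected utility = {eu:.3f}")
```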

  18. Accuracy of the QUAD4 thick shell element

    NASA Technical Reports Server (NTRS)

    Case, William R.; Bowles, Tiffany D.; Croft, Alicia K.; Mcginnis, Mark A.

    1990-01-01

    The accuracy of the relatively new QUAD4 thick shell element is assessed via comparison with a theoretical solution for thick homogeneous and honeycomb flat simply supported plates under a uniform pressure load. The theoretical thick-plate solution is based on the theory developed by Reissner and includes the effects of transverse shear flexibility, which are not included in thin-plate solutions based on Kirchhoff plate theory. In addition, the QUAD4 is assessed using a set of finite element test problems developed by the MacNeal-Schwendler Corp. (MSC). A comparison of the COSMIC QUAD4 element with those from MSC and Universal Analytics, Inc. (UAI) for these test problems is presented. The current COSMIC QUAD4 element is shown to be in excellent agreement with both the theoretical solutions and the two commercial versions of NASTRAN to which it was compared.
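
    The effect of transverse shear flexibility that distinguishes the thick-plate (Reissner-type) solution from the Kirchhoff solution can be illustrated with a Navier series for a simply supported square plate under uniform pressure. The sketch below uses a first-order shear-deformation (Mindlin-type) correction with invented material data; it is not the QUAD4 formulation itself.

```python
# Hedged sketch: Navier-series center deflection of a simply supported square
# plate under uniform pressure, comparing the Kirchhoff (thin) result with a
# first-order shear-deformation (Mindlin-type) correction. Illustrative only;
# this is not the QUAD4/Reissner formulation used in the study.
import numpy as np

E, nu, kappa = 70e9, 0.3, 5.0 / 6.0      # material data and shear-correction factor
a = 1.0                                  # square plate edge length (m)
h = 0.1                                  # thickness (thick plate: a/h = 10)
q0 = 1.0e4                               # uniform pressure (Pa)

D = E * h**3 / (12 * (1 - nu**2))        # bending rigidity
G = E / (2 * (1 + nu))

w_thin = w_thick = 0.0
for m in range(1, 40, 2):                # odd terms only for a uniform load
    for n in range(1, 40, 2):
        qmn = 16 * q0 / (np.pi**2 * m * n)
        lam = (m * np.pi / a) ** 2 + (n * np.pi / a) ** 2
        s = np.sin(m * np.pi / 2) * np.sin(n * np.pi / 2)   # series value at the center
        w_thin += qmn / (D * lam**2) * s
        w_thick += qmn / (D * lam**2) * (1 + D * lam / (kappa * G * h)) * s
print(f"center deflection: thin {w_thin:.3e} m, shear-deformable {w_thick:.3e} m")
```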

  19. Dynamics of Complexity and Accuracy: A Longitudinal Case Study of Advanced Untutored Development

    ERIC Educational Resources Information Center

    Polat, Brittany; Kim, Youjin

    2014-01-01

    This longitudinal case study follows a dynamic systems approach to investigate an under-studied research area in second language acquisition, the development of complexity and accuracy for an advanced untutored learner of English. Using the analytical tools of dynamic systems theory (Verspoor et al. 2011) within the framework of complexity,…

  20. Comparing the accuracy of perturbative and variational calculations for predicting fundamental vibrational frequencies of dihalomethanes

    NASA Astrophysics Data System (ADS)

    Krasnoshchekov, Sergey V.; Schutski, Roman S.; Craig, Norman C.; Sibaev, Marat; Crittenden, Deborah L.

    2018-02-01

    Three dihalogenated methane derivatives (CH2F2, CH2FCl, and CH2Cl2) were used as model systems to compare and assess the accuracy of two different approaches for predicting observed fundamental frequencies: canonical operator Van Vleck vibrational perturbation theory (CVPT) and vibrational configuration interaction (VCI). For convenience and consistency, both methods employ the Watson Hamiltonian in rectilinear normal coordinates, expanding the potential energy surface (PES) as a Taylor series about equilibrium and constructing the wavefunction from a harmonic oscillator product basis. At the highest levels of theory considered here (fourth-order CVPT, and VCI in a harmonic oscillator basis with up to 10 quanta of vibrational excitation, in conjunction with a 4-mode representation sextic force field, SFF-4MR, computed at MP2/cc-pVTZ with replacement CCSD(T)/aug-cc-pVQZ harmonic force constants), the two methods agree to within 0.3 cm-1 on average, with a maximum difference of 1.7 cm-1. The major remaining accuracy-limiting factors are the accuracy of the underlying electronic structure model, followed by the incompleteness of the PES expansion. Nonetheless, computed and experimental fundamentals agree to within 5 cm-1, with an average difference of 2 cm-1, confirming the utility and accuracy of both theoretical models. One exception to this rule is the formally IR-inactive, but weakly allowed through Coriolis coupling, H-C-H out-of-plane twisting mode of dichloromethane, whose spectrum we therefore revisit and reassign. We also investigate convergence with respect to order of CVPT, VCI excitation level, and order of PES expansion, concluding that premature truncation substantially decreases accuracy, although VCI(6)/SFF-4MR results are still of acceptable accuracy, and some error cancellation is observed with CVPT2 using a quartic force field.
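
    For orientation, the familiar second-order (VPT2) expression for a fundamental in terms of harmonic frequencies and anharmonicity constants is reproduced below; the study above works at fourth order (CVPT4) and with VCI, which go beyond this formula.

```latex
% Second-order vibrational perturbation theory (VPT2) fundamental, shown only
% for orientation; the study above uses fourth-order CVPT and VCI.
\nu_i \;=\; \omega_i \;+\; 2\,x_{ii} \;+\; \tfrac{1}{2}\sum_{j \neq i} x_{ij}
```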

  1. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks are assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  2. Accuracy of Petermann's K-factor in the theory of semiconductor lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Mashade, M.B.; Arnaud, J.

    1986-04-01

    Petermann has proposed that the classical formula for the linewidth of a laser be multiplied by a factor K >> 1 in the case of gain-guided semiconductor lasers. The concept of power in the mode used by that author, however, is not well defined in a waveguide with gain, and his theory is therefore open to question. The analysis given here avoids this difficulty and nevertheless agrees with Petermann's result. This is because spatial mode filtering is strong in oscillating lasers.
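
    The quantity at issue is commonly written as the mode-field ratio below, which equals 1 for a purely real (index-guided) field and exceeds 1 under gain guiding; this is the commonly quoted definition, given here only for orientation.

```latex
% Commonly quoted form of Petermann's K-factor for a transverse mode field E(x,y);
% K = 1 for a purely real (index-guided) field and K > 1 with gain guiding.
K \;=\; \frac{\left(\displaystyle\int\!\!\int |E(x,y)|^{2}\,dx\,dy\right)^{2}}
            {\left|\displaystyle\int\!\!\int E^{2}(x,y)\,dx\,dy\right|^{2}}
```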

  3. Ligatoxin B, a new cytotoxic protein with a novel helix-turn-helix DNA-binding domain from the mistletoe Phoradendron liga.

    PubMed Central

    Li, Shi-Sheng; Gullbo, Joachim; Lindholm, Petra; Larsson, Rolf; Thunberg, Eva; Samuelsson, Gunnar; Bohlin, Lars; Claeson, Per

    2002-01-01

    A new basic protein, designated ligatoxin B, containing 46 amino acid residues has been isolated from the mistletoe Phoradendron liga (Gill.) Eichl. (Viscaceae). The protein's primary structure, determined unambiguously using a combination of automated Edman degradation, trypsin enzymic digestion, and tandem MS analysis, was 1-KSCCPSTTAR-NIYNTCRLTG-ASRSVCASLS-GCKIISGSTC-DSGWNH-46. Ligatoxin B exhibited in vitro cytotoxic activities on the human lymphoma cell line U-937-GTB and the primary multidrug-resistant renal adenocarcinoma cell line ACHN, with IC50 values of 1.8 microM and 3.2 microM respectively. Sequence alignment with other thionins identified a new member of the class 3 thionins, ligatoxin B, which is similar to the earlier described ligatoxin A. As predicted by the method of homology modelling, ligatoxin B shares a three-dimensional structure with the viscotoxins and purothionins and so may have the same mode of cytotoxic action. The novel similarities observed by structural comparison of the helix-turn-helix (HTH) motifs of the thionins, including ligatoxin B, and the HTH DNA-binding proteins, led us to propose the working hypothesis that thionins represent a new group of DNA-binding proteins. This working hypothesis could be useful in further dissecting the molecular mechanisms of thionin cytotoxicity and of thionin opposition to multidrug resistance, and useful in clarifying the physiological function of thionins in plants. PMID:12049612

  4. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
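
    A hedged sketch of the CPT machinery referred to above, using the Tversky-Kahneman (1992) value and probability-weighting functions for a simple gain-only gamble; the parameter values are the commonly cited median estimates and serve purely as illustration.

```python
# Hedged sketch of cumulative prospect theory (CPT) for a simple two-outcome
# gamble (win x with probability p, else 0), using the Tversky-Kahneman (1992)
# value and probability-weighting forms. Parameters are the commonly cited
# median estimates, used here only for illustration.
alpha, gamma, lam = 0.88, 0.61, 2.25

def value(x):
    """Power value function with loss aversion for negative outcomes."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p):
    """Inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_simple(x, p):
    """CPT value of 'win x with probability p, otherwise 0'."""
    return weight(p) * value(x)

# Example: a 10% chance of 100; small probabilities are over-weighted.
print("CPT value:", round(cpt_simple(100.0, 0.10), 2),
      "  expected value:", 100.0 * 0.10)
```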

  5. Micromolding of polymer waveguides

    NASA Astrophysics Data System (ADS)

    Hanemann, Thomas; Ulrich, Hermann; Ruprecht, Robert; Hausselt, Juergen H.

    1999-10-01

    In microsystem technology, the fabrication of passive and active micro-optical components from polymers is becoming increasingly important in view of the rapidly expanding range of applications, e.g. in telecommunications. The LIGA process developed at FZK, Germany, allows the direct fabrication of microcomponents with lateral dimensions in the micrometer range, structural details in the submicrometer range, high aspect ratios of up to several hundred, and a final average surface roughness of less than 50 nm, from small up to large scales. The molding of polymer components for micro-optical applications, especially in the single-mode range, is determined by the maximum accuracy achievable with the molding technique itself and by the tolerances acceptable for low damping and coupling losses. Following the LIGA and related techniques, e.g. mechanical microengineering, this work presents the fabrication of polymer single-mode waveguides using a combination of micromolding and light-curing steps.

  6. Accuracy and Availability of Egnos - Results of Observations

    NASA Astrophysics Data System (ADS)

    Felski, Andrzej; Nowak, Aleksander; Woźniak, Tomasz

    2011-01-01

    According to the SBAS concept, the user should receive timely and correct information about system integrity together with corrections to the pseudorange measurements, which leads to better coordinate accuracy. In theory the whole system is permanently monitored by RIMS stations, so faulty information should not reach the user. The quality of the system is guaranteed within the borders of the system coverage; however, lower accuracy and availability are still observed in the eastern part of Poland. This prompted an observation campaign and analysis of the real accuracy and availability of the EGNOS service in the context of supporting air operations at local airports and supplementing hydrographic operations in the Polish Exclusive Zone. Registrations were conducted at three PANSA stations situated at the airports in Warsaw, Krakow and Rzeszow and at the PNA station in Gdynia. Measurements at the PANSA stations were made continuously, month by month, until the end of September 2011. These stations are equipped with Septentrio PolaRx2e receivers and are part of the EGNOS Data Collection Network run by EUROCONTROL. The advantage of these registrations is the uniformity of the receivers. In addition, measurements in Gdynia were made with different receivers, mainly those dedicated to marine navigation: CSI Wireless 1, NOVATEL OEMV, Sperry Navistar, Crescent V-100 and R110, as well as Magellan FX420. The main object of the analyses was the accuracy and availability of the EGNOS service at each point and for the different receivers. Accuracy was analyzed separately for each coordinate. Finally, the temporal and spatial correlations of the coordinates and their availability and accuracy were investigated. The findings show that the present accuracy of the EGNOS service is about 1.5 m (95%), but the availability of the service is controversial. The accuracy of the present EGNOS service meets the parameters of APV I and even APV II
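
    The 95% accuracy figure quoted above is the kind of statistic that can be computed directly from logged fixes against a surveyed reference position, as in the hedged sketch below (simulated errors; a real analysis would also track availability and per-coordinate statistics, as the study does).

```python
# Hedged sketch: compute the 95th-percentile horizontal position error of
# logged EGNOS fixes relative to a surveyed reference point. The fixes below
# are simulated; real processing would also evaluate service availability.
import numpy as np

rng = np.random.default_rng(1)
east_err = rng.normal(0.0, 0.6, size=86400)    # simulated per-second errors (m)
north_err = rng.normal(0.0, 0.7, size=86400)

horiz_err = np.hypot(east_err, north_err)
print("horizontal accuracy, 95% (m):", round(np.percentile(horiz_err, 95), 2))
```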

  7. No Special K! A Signal Detection Framework for the Strategic Regulation of Memory Accuracy

    ERIC Educational Resources Information Center

    Higham, Philip A.

    2007-01-01

    Two experiments investigated criterion setting and metacognitive processes underlying the strategic regulation of accuracy on the Scholastic Aptitude Test (SAT) using Type-2 signal detection theory (SDT). In Experiment 1, report bias was manipulated by penalizing participants either 0.25 (low incentive) or 4 (high incentive) points for each error.…

  8. fMRI evidence for a dual process account of the speed-accuracy tradeoff in decision-making.

    PubMed

    Ivanoff, Jason; Branning, Philip; Marois, René

    2008-07-09

    The speed and accuracy of decision-making have a well-known trading relationship: hasty decisions are more prone to errors while careful, accurate judgments take more time. Despite the pervasiveness of this speed-accuracy trade-off (SAT) in decision-making, its neural basis is still unknown. Using functional magnetic resonance imaging (fMRI) we show that emphasizing the speed of a perceptual decision at the expense of its accuracy lowers the amount of evidence-related activity in lateral prefrontal cortex. Moreover, this speed-accuracy difference in lateral prefrontal cortex activity correlates with the speed-accuracy difference in the decision criterion metric of signal detection theory. We also show that the same instructions increase baseline activity in a dorso-medial cortical area involved in the internal generation of actions. These findings suggest that the SAT is neurally implemented by modulating not only the amount of externally-derived sensory evidence used to make a decision, but also the internal urge to make a response. We propose that these processes combine to control the temporal dynamics of the speed-accuracy trade-off in decision-making.

  9. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress the noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, the arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide the development of CTP imaging technology for better quantification accuracy and lower radiation dose.
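
    The deconvolution step at the heart of the analysis can be illustrated with a Tikhonov-regularized SVD inversion of the arterial-input-function convolution matrix. The sketch below uses simulated curves and an arbitrary regularization strength; it is only a toy version of the imaging chain analyzed in the paper.

```python
# Hedged sketch of the deconvolution step in CT perfusion: recover the flow-
# scaled residue function from a tissue curve and an arterial input function
# (AIF) via Tikhonov-regularized SVD of the AIF convolution matrix. The
# curves and the regularization strength are simulated/illustrative.
import numpy as np

dt, n = 1.0, 60
t = np.arange(n) * dt
aif = (t / 4.0) ** 3 * np.exp(-t / 4.0)            # simulated AIF (gamma-variate)
residue = np.exp(-t / 6.0)                         # true residue function, MTT = 6 s
cbf_true = 0.01                                    # flow scaling (arbitrary units)

A = np.array([[dt * aif[i - j] if j <= i else 0.0 for j in range(n)]
              for i in range(n)])                  # lower-triangular convolution matrix
tissue = A @ (cbf_true * residue)
tissue += np.random.default_rng(2).normal(0, 2e-4, n)   # measurement noise

U, s, Vt = np.linalg.svd(A)
lam = 0.1 * s.max()                                # Tikhonov strength (tunable)
filt = s / (s ** 2 + lam ** 2)                     # regularized inverse singular values
k = Vt.T @ (filt * (U.T @ tissue))                 # flow-scaled residue estimate
print("estimated CBF (arb. units):", round(k.max(), 4), " true:", cbf_true)
```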

  10. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory for a thin plate is used. A simulation shows that the localization accuracy of beamforming depends on multi-mode propagation, dispersion, wave velocity and array dimension. To reduce the effect of the propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity is presented to reduce the effect of array size. The accuracy of rubbing localization based on beamforming and on the improved method of the present paper is compared in a rubbing test carried out on a rotating machinery test rig. The results indicate that the improved method can localize the rub fault effectively. Copyright © 2013 Elsevier B.V. All rights reserved.
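
    The beamforming idea can be illustrated with a plain delay-and-sum scan over a grid of candidate source positions. The sketch below uses an invented sensor layout, a single constant wave speed and simulated burst signals; the paper's wavelet-packet pre-processing and velocity revision are omitted.

```python
# Hedged sketch of delay-and-sum beamforming for AE source localization on a
# plate: for each candidate grid point, back-shift every sensor signal by its
# propagation delay and score the coherent sum. Geometry, wave velocity and
# signals are simulated; the paper's wavelet pre-processing is omitted.
import numpy as np

fs, c = 1.0e6, 3000.0                               # sample rate (Hz), wave speed (m/s)
sensors = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])
source = np.array([0.27, 0.13])                     # "true" rub location (m)

t = np.arange(2000) / fs
burst = lambda t0: np.exp(-((t - t0) * 2e4) ** 2) * np.sin(2e5 * 2 * np.pi * t)
signals = np.array([burst(np.linalg.norm(s - source) / c + 5e-4) for s in sensors])

xs = ys = np.linspace(0.0, 0.4, 81)
power = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        delays = np.linalg.norm(sensors - [x, y], axis=1) / c
        shifted = [np.interp(t + d, t, sig, left=0, right=0)
                   for d, sig in zip(delays, signals)]
        power[i, j] = np.sum(np.sum(shifted, axis=0) ** 2)

i, j = np.unravel_index(power.argmax(), power.shape)
print("estimated source (m):", (round(xs[i], 3), round(ys[j], 3)))
```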

  11. “UTILIZING” SIGNAL DETECTION THEORY

    PubMed Central

    Lynn, Spencer K.; Barrett, Lisa Feldman

    2014-01-01

    What do inferring what a person is thinking or feeling, deciding to report a symptom to your doctor, judging a defendant’s guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, which engender different appropriate responses), and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial we show how, by incorporating the economic concept of utility, signal detection theory serves as a model of optimal decision making, beyond its common use as an analytic method. This utility approach to signal detection theory highlights potentially enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (a functional relationship between bias and sensitivity). A “utilized” signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. PMID:25097061

  12. Accuracy of gravitational physics tests using ranges to the inner planets

    NASA Technical Reports Server (NTRS)

    Ashby, N.; Bender, P.

    1981-01-01

    A number of different types of deviations from Kepler's laws for planetary orbits can occur in non-Newtonian metric gravitational theories. These include secular changes in all of the orbital elements and in the mean motion, plus additional periodic perturbations in the coordinates. The first-order corrections to the Keplerian motion of a single planet around the Sun due to the parameterized post-Newtonian theory parameters were calculated, as well as the corrections due to the solar quadrupole moment and a possible secular change in the gravitational constant. The results were applied to the case of proposed high-accuracy ranging experiments from the Earth to a Mercury-orbiting spacecraft in order to see how well the various parameters can be determined.

  13. Recognition memory and awareness: A high-frequency advantage in the accuracy of knowing.

    PubMed

    Gregg, Vernon H; Gardiner, John M; Karayianni, Irene; Konstantinou, Ira

    2006-04-01

    The well-established advantage of low-frequency words over high-frequency words in recognition memory has been found to occur in remembering and not knowing. Two experiments employed remember and know judgements, and divided attention to investigate the possibility of an effect of word frequency on know responses given appropriate study conditions. With undivided attention at study, the usual low-frequency advantage in the accuracy of remember responses, but no effect on know responses, was obtained. Under a demanding divided attention task at encoding, a high-frequency advantage in the accuracy of know responses was obtained. The results are discussed in relation to theories of knowing, particularly those incorporating perceptual and conceptual fluency.

  14. Improving the accuracy of Density Functional Theory (DFT) calculation for homolysis bond dissociation energies of Y-NO bond: generalized regression neural network based on grey relational analysis and principal component analysis.

    PubMed

    Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2011-01-01

    We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA) (GP-GRNN) to improve the accuracy of density functional theory (DFT) calculations of homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combined quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) approaches reduce the root-mean-square (RMS) error of the calculated homolysis BDE of the 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1) for the B3LYP/6-31G (d) calculation. The newly developed GP-GRNN approach further reduces the RMS error to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G (d) can improve the accuracy of calculated homolysis BDEs in quantum chemistry and can predict homolysis BDEs which cannot be obtained experimentally.
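
    The GRNN core is Nadaraya-Watson kernel regression: a prediction is a distance-weighted average of training targets. The sketch below uses simulated descriptors, targets and smoothing factor, and omits the paper's grey-relational-analysis and PCA steps.

```python
# Hedged sketch of the GRNN core (Nadaraya-Watson kernel regression) used to
# correct DFT homolysis BDEs: the prediction is a Gaussian-kernel weighted
# average of training targets. Descriptors, targets and the smoothing factor
# sigma are simulated; the GRA and PCA pre-processing steps are omitted.
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.normal(size=(80, 5))                # descriptors of training molecules
y_train = rng.normal(30.0, 3.0, size=80)          # "experimental" BDEs (kcal/mol)
sigma = 0.8                                       # GRNN smoothing factor

def grnn_predict(x, X, y, sigma):
    d2 = np.sum((X - x) ** 2, axis=1)             # squared distances to training points
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

x_new = rng.normal(size=5)
print("corrected BDE estimate (kcal/mol):",
      round(grnn_predict(x_new, X_train, y_train, sigma), 2))
```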

  15. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only a correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method is proposed based on an active learning kriging model that needs only to predict the sign of the performance function correctly. Interval Monte Carlo simulation and a modified optimization method based on the Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of the failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
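
    The evidence-theory bookkeeping behind the failure-probability bounds can be sketched with a handful of joint focal elements: a box on which the performance function is negative everywhere contributes to the belief, and a box that merely touches the failure region contributes to the plausibility. A cheap analytic performance function with corner evaluation stands in for the paper's kriging surrogate.

```python
# Hedged sketch of evidence-theory failure bounds: each joint focal element is
# a box with a basic probability assignment (BPA). A box that lies entirely in
# the failure region (g < 0) adds its mass to the belief; a box that merely
# intersects the failure region adds to the plausibility. A toy analytic g and
# corner evaluation stand in for the paper's kriging surrogate.
from itertools import product

g = lambda x: 3.0 - x[0] - 0.5 * x[1]           # toy performance function (fail if < 0)

# Focal elements: (x1 interval, x2 interval, mass); masses sum to 1.
focal = [((0.0, 1.0), (0.0, 1.0), 0.2),
         ((1.0, 2.5), (0.0, 1.0), 0.3),
         ((1.0, 2.5), (1.0, 3.0), 0.3),
         ((3.5, 4.5), (1.0, 3.0), 0.2)]

belief = plausibility = 0.0
for ix, iy, mass in focal:
    corners = [g(c) for c in product(ix, iy)]   # g is monotone, so corners bound it
    g_min, g_max = min(corners), max(corners)
    if g_max < 0:                               # entire box fails
        belief += mass
    if g_min < 0:                               # box touches the failure region
        plausibility += mass
print(f"failure probability bounds: [{belief:.2f}, {plausibility:.2f}]")
```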

  16. High accuracy acoustic relative humidity measurement in duct flow with air.

    PubMed

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
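
    The transit-time relations behind the sensor are standard: downstream and upstream times along one path give both the line-averaged sound speed and the gas velocity. The closing step from sound speed and temperature to relative humidity requires a humid-air sound-speed model; the sketch below substitutes a crude ideal-gas mixture approximation and a Magnus saturation-pressure formula for the paper's calibrated model, so its numbers are purely illustrative.

```python
# Hedged sketch of the transit-time relations behind the sensor: for a path of
# length L aligned with the flow, the downstream/upstream times give the
# line-averaged sound speed c and the gas velocity v. The inversion from
# (c, T) to relative humidity uses a very rough ideal-gas mixture model and a
# Magnus saturation pressure, standing in for the instrument's calibration.
import numpy as np
from scipy.optimize import brentq

L = 0.30                            # acoustic path length (m), illustrative
t_down, t_up = 8.000e-4, 8.219e-4   # measured transit times (s), illustrative
T_c, p = 60.0, 101325.0             # gas temperature (deg C) and pressure (Pa)

c_meas = 0.5 * L * (1.0 / t_down + 1.0 / t_up)   # sound speed (m/s)
v_gas = 0.5 * L * (1.0 / t_down - 1.0 / t_up)    # flow velocity (m/s)

R, T = 8.314, T_c + 273.15
def c_model(x):                     # x = water-vapour mole fraction (rough model)
    M = (1 - x) * 0.028964 + x * 0.018016        # mixture molar mass (kg/mol)
    cp = (1 - x) * 29.1 + x * 33.6               # molar heat capacities (J/mol K)
    gamma = cp / (cp - R)
    return np.sqrt(gamma * R * T / M)

x_w = brentq(lambda x: c_model(x) - c_meas, 1e-6, 0.5)
p_sat = 610.94 * np.exp(17.625 * T_c / (T_c + 243.04))   # Magnus formula (Pa)
print(f"c = {c_meas:.1f} m/s, v = {v_gas:.2f} m/s, RH = {100 * x_w * p / p_sat:.0f}%")
```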

  17. Incorporation of Half-Cycle Theory Into Ko Aging Theory for Aerostructural Flight-Life Predictions

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Tran, Van T.; Chen, Tony

    2007-01-01

    The half-cycle crack growth theory was incorporated into the Ko closed-form aging theory to improve accuracy in the predictions of operational flight life of failure-critical aerostructural components. A new crack growth computer program was written for reading the maximum and minimum loads of each half-cycle from the random loading spectra for crack growth calculations and generation of in-flight crack growth curves. The unified theories were then applied to calculate the number of flights (operational life) permitted for B-52B pylon hooks and Pegasus adapter pylon hooks to carry the Hyper-X launching vehicle that air launches the X-43 Hyper-X research vehicle. A crack growth curve for each hook was generated for visual observation of the crack growth behavior during the entire air-launching or captive flight. It was found that taxiing and the takeoff run induced a major portion of the total crack growth per flight. The operational life theory presented can be applied to estimate the service life of any failure-critical structural components.
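
    The half-cycle bookkeeping can be illustrated with a Paris-type growth law, where each reversal-to-reversal half-cycle contributes half of a full cycle's growth for its stress range. The constants, geometry factor and load spectrum below are invented placeholders and do not represent the Ko aging theory or the hook load spectra.

```python
# Hedged sketch of half-cycle crack-growth accounting with a Paris-type law:
# successive load reversals are read from a (simulated) flight spectrum, and
# each half-cycle advances the crack by half of a full cycle's growth for its
# stress range, with delta_K = Y * delta_sigma * sqrt(pi * a). The constants,
# geometry factor and spectrum are illustrative, not the Ko aging theory.
import numpy as np

C, m, Y = 1.0e-11, 3.0, 1.12          # Paris constants (for MPa*sqrt(m)) and geometry factor
a = 0.002                             # initial crack length (m)

rng = np.random.default_rng(4)
reversals = rng.uniform(20e6, 180e6, size=2001)   # simulated load peaks/valleys (Pa)

a0 = a
for s1, s2 in zip(reversals[:-1], reversals[1:]):
    d_sigma = abs(s2 - s1)                        # half-cycle stress range (Pa)
    d_K = Y * d_sigma * np.sqrt(np.pi * a)        # stress-intensity range (Pa*sqrt(m))
    a += 0.5 * C * (d_K / 1e6) ** m               # convert to MPa*sqrt(m) for C

print(f"crack grew from {a0*1e3:.3f} mm to {a*1e3:.3f} mm over one spectrum")
```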

  18. Accuracy of Range Restriction Correction with Multiple Imputation in Small and Moderate Samples: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Spiel, Christiane

    2016-01-01

    Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
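
    For reference, the classical large-sample (Thorndike Case II) correction that this literature builds on is sketched below; the study's multiple-imputation approach itself is not reproduced here.

```python
# Hedged sketch of the classical Thorndike Case II range-restriction
# correction, the large-sample-theory formula this literature builds on:
#   r_c = r*u / sqrt(1 + r^2*(u^2 - 1)), with u = SD_unrestricted/SD_restricted.
# The study's multiple-imputation procedure is not reproduced here.
import math

def correct_range_restriction(r_restricted: float, u: float) -> float:
    return r_restricted * u / math.sqrt(1.0 + r_restricted**2 * (u**2 - 1.0))

# Example: a correlation of .30 observed in a selected sample whose predictor
# SD is 60% of the applicant-pool SD (so u = 1/0.6).
print(round(correct_range_restriction(0.30, 1.0 / 0.6), 3))
```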

  19. A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.

    PubMed

    Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G

    2017-02-01

    The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvers and reviewing British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (so called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvers may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  20. Comparison of high-angle-of-attack slender-body theory and exact solutions for potential flow over an ellipsoid

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    1990-01-01

    The accuracy of high-alpha slender-body theory (HASBT) for bodies with elliptical cross-sections is presently demonstrated by means of a comparison with exact solutions for incompressible potential flow over a wide range of ellipsoid geometries and angles of attack and sideslip. The addition of the appropriate trigonometric coefficients to the classical slender-body theory decomposition yields the formally correct HASBT, and results in accuracies previously considered unattainable.

  1. Comparisons between geometrical optics and Lorenz-Mie theory

    NASA Technical Reports Server (NTRS)

    Ungut, A.; Grehan, G.; Gouesbet, G.

    1981-01-01

    Both the Lorenz-Mie and geometrical optics theories are used in calculating the scattered light patterns produced by transparent spherical particles over a wide range of diameters, between 1.0 and 100 microns, and for the range of forward scattering angles from zero to 20 deg. A detailed comparison of the results shows the greater accuracy of the geometrical optics theory in the forward direction. Emphasis is given to the simultaneous sizing and velocimetry of particles by means of pedestal calibration methods.

  2. Physical Accuracy of Q Models of Seismic Attenuation

    NASA Astrophysics Data System (ADS)

    Morozov, I. B.

    2016-12-01

    Accuracy of theoretical models is a required prerequisite for any type of seismic imaging and interpretation. Among all geophysical disciplines, the theory of seismic and tidal attenuation is the least developed, and most practical studies use viscoelastic models based on empirical Q factors. To simplify imaging and inversions, the Qs are often approximated as frequency-independent or following a power law with frequency. However, simplicity of inversion should not outweigh the problematic physical accuracy of such models. Typical images of spatially-variable crustal and mantle Qs are "apparent," analogously to pseudo-depth, apparent-resistivity images in electrical imaging. Problems with Q models can be seen from controversial general observations present in many studies; for example: 1) In global Q models, bulk attenuation is much lower than the shear one throughout the whole Earth. This is considered a fundamental relation for the Earth; nevertheless, it is also very peculiar physically and suggests a negative Q for the Lamé modulus. This relation is also not supported by most first-principle models of materials and laboratory studies. 2) The Q parameterization requires that the entire outer core of the Earth is assigned zero attenuation, despite its large volume and the presence of viscosity and shear deformation in free oscillations. 3) In laboratory and surface-wave studies, the bulk and shear Qs can be different for different wave modes, different sample sizes, and different boundary conditions on the surface. Similarly, the Qs measured from body-S, Love, Lg, or ScS waves may not equal each other. 4) In seismic coda studies, the Q is often found to increase linearly (or even faster) with frequency. Such character of energy dissipation is controversial physically, but can be readily explained as an artifact of inaccurately-known geometrical spreading. To overcome the physical inaccuracies and apparent character of seismic attenuation models, mechanical theories of materials

  3. First-Principles pH Theory

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Hyun; Zhang, S. B.

    2006-03-01

    Despite being one of the most important macroscopic measures, with a history that predates quantum mechanics, the concept of pH has rarely been mentioned in microscopic theories, nor has it been incorporated computationally into first-principles theories of aqueous solutions. Here, we formulate a theory for the pH dependence of the solution formation energy by introducing the proton chemical potential as the microscopic counterpart of pH in atomistic solution models. Within the theory, general acid-base chemistry can be cast in a simple pictorial representation. We adopt density-functional molecular dynamics to demonstrate the usefulness of the method by studying a number of solution systems including water, small solute molecules such as NH3 and HCOOH, and more complex amino acids with several functional groups. For pure water, we calculated the auto-ionization constant to be 13.2, with 95% accuracy. For other solutes, the calculated dissociation constants, i.e., the so-called pKa, are also in reasonable agreement with experiments. Our first-principles pH theory can be readily applied to broad solution chemistry problems such as redox reactions.

  4. Consistency problems associated to the improvement of precession-nutation theories

    NASA Astrophysics Data System (ADS)

    Ferrandiz, J. M.; Escapa, A.; Baenas, T.; Getino, J.; Navarro, J. F.; Belda, S.

    2014-12-01

    The complexity of modelling the rotational motion of the Earth in space has meant that no single theory has been adopted to describe it in full. Hence, it is customary to use at least one theory for precession and another for nutation. The classic approach derives some of the fundamental parameters from the precession theory at hand, like, e.g., the dynamical ellipticity H, and then uses those values in the nutation theory. The former IAU1976 precession and IAU1980 nutation theories followed that scheme. Along with the improvement in the accuracy of the determination of EOP (Earth orientation parameters), IAU1980 was superseded by IAU2000, based on the application of the MHB2000 (Mathews et al 2002) transfer function to the previous rigid-Earth analytical theory REN2000 (Souchay et al 1999). The latter was derived while the precession model IAU1976 was still in force; therefore it used the corresponding values for some of the fundamental parameters, such as the precession rate, associated with the dynamical ellipticity, and the obliquity of the ecliptic at the reference epoch. The new precession model P03 was adopted as IAU2006. That change introduced some inconsistency, since P03 uses different values for some of the fundamental parameters that MHB2000 inherited from REN2000. Besides, the derivation of the basic Earth parameters of MHB2000 itself comprised a fitted variation of the dynamical ellipticity adopted in the background rigid theory. Due to the strict accuracy requirements of the present and coming times, the magnitude of the inconsistencies originated by this two-fold approach is no longer negligible. Some corrections have been proposed by Capitaine et al (2005) and Escapa et al (2014) in order to reach a better level of consistency between precession and nutation theories and parameters. In this presentation we revisit the problem taking into account some of the advances in precession theory not accounted for yet, stemming

  5. Potential theory of radiation

    NASA Technical Reports Server (NTRS)

    Chiu, Huei-Huang

    1989-01-01

    A theoretical method is being developed by which the structure of a radiation field can be predicted by a radiation potential theory, similar to a classical potential theory. The introduction of a scalar potential is justified on the grounds that the spectral intensity vector is irrotational. The vector is also solenoidal in the limits of a radiation field in complete radiative equilibrium or in a vacuum. This method provides an exact, elliptic type equation that will upgrade the accuracy and the efficiency of the current CFD programs required for the prediction of radiation and flow fields. A number of interesting results emerge from the present study. First, a steady state radiation field exhibits an optically modulated inverse square law distribution character. Secondly, the unsteady radiation field is structured with two conjugate scalar potentials. Each is governed by a Klein-Gordon equation with a frictional force and a restoring force. This steady potential field structure and the propagation of radiation potentials are consistent with the well known results of classical electromagnetic theory. The extension of the radiation potential theory for spray combustion and hypersonic flow is also recommended.

  6. Stability and accuracy of metamemory in adulthood and aging: a longitudinal analysis.

    PubMed

    McDonald-Miszczak, L; Hertzog, C; Hultsch, D F

    1995-12-01

    The stability and accuracy of memory perceptions in 2 longitudinal samples was examined. Sample 1 consisted of 231 adults (22-78 years) tested twice over 2 years. Sample 2 consisted of 234 adults (55-86 years) tested 3 times over 6 years. Measures of perceived and actual memory change were obtained. A primary focus was whether perceptions of memory change stem from application of an implicit theory about aging and memory or from accurate monitoring of actual changes in performance. Individual differences in metamemory were highly stable over time. Results suggested at least some accurate monitoring of memory in Sample 2, in which actual change was greatest. However the overall pattern of results is largely consistent with predictions derived from an implicit theory hypothesis.

  7. How localized is "local?" Efficiency vs. accuracy of O(N) domain decomposition in local orbital based all-electron electronic structure theory

    NASA Astrophysics Data System (ADS)

    Havu, Vile; Blum, Volker; Scheffler, Matthias

    2007-03-01

    Numeric atom-centered local orbitals (NAO) are efficient basis sets for all-electron electronic structure theory. The locality of NAO's can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: Large light-element molecules [Alanine-based polypeptide chains (Ala)n], and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius rc, and exploited by appropriately shaped integration domains (subsets of integration points). Conventional tight rc <= 3 Å have no measurable accuracy impact in (Ala)n, but introduce inaccuracies of 20-30 meV/atom in Cun. The domain shape impacts the computational effort by only 10-20 % for reasonable rc. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).

  8. An integrated theory of attention and decision making in visual signal detection.

    PubMed

    Smith, Philip L; Ratcliff, Roger

    2009-04-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions. (c) 2009 APA, all rights reserved
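
    The diffusion decision stage of the theory can be illustrated with a plain Euler-Maruyama simulation of evidence accumulation between two absorbing boundaries; in the full model the drift would be set by the VSTM trace strength. Parameters below are illustrative only.

```python
# Hedged sketch of the diffusion decision stage only (not the full integrated
# attention/VSTM model): evidence accumulates with drift v and noise s until it
# hits an upper (correct) or lower (error) boundary. Parameters are illustrative.
import numpy as np

v, s, a, dt = 1.2, 1.0, 1.0, 0.001      # drift, noise, boundary separation, step (s)
z = a / 2.0                             # unbiased starting point
rng = np.random.default_rng(5)

rts, correct = [], []
for _ in range(5000):
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    rts.append(t)
    correct.append(x >= a)

print(f"accuracy = {np.mean(correct):.3f}, mean RT = {np.mean(rts)*1000:.0f} ms")
```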

  9. Application of Mensuration Technology to Improve the Accuracy of Field Artillery Firing Unit Location

    DTIC Science & Technology

    2013-12-13

    [Only fragments of this record were extracted; front-matter headings and definitions are retained below.] U.S. Army Field Artillery Operations; Geodesy. "... Experts in this field of study have a full working knowledge of geodesy and the theory that allows mensuration to surpass the level of accuracy achieved ..." "... Fire that is intended to achieve the desired result on target." Geodesy: "that branch of applied mathematics which determines by observation ..."

  10. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  11. MN15-L: A New Local Exchange-Correlation Functional for Kohn-Sham Density Functional Theory with Broad Accuracy for Atoms, Molecules, and Solids.

    PubMed

    Yu, Haoyu S; He, Xiao; Truhlar, Donald G

    2016-03-08

    Kohn-Sham density functional theory is widely used for applications of electronic structure theory in chemistry, materials science, and condensed-matter physics, but the accuracy depends on the quality of the exchange-correlation functional. Here, we present a new local exchange-correlation functional called MN15-L that predicts accurate results for a broad range of molecular and solid-state properties including main-group bond energies, transition metal bond energies, reaction barrier heights, noncovalent interactions, atomic excitation energies, ionization potentials, electron affinities, total atomic energies, hydrocarbon thermochemistry, and lattice constants of solids. The MN15-L functional has the same mathematical form as a previous meta-nonseparable gradient approximation exchange-correlation functional, MN12-L, but it is improved because we optimized it against a larger database, designated 2015A, and included smoothness restraints; the optimization has a much better representation of transition metals. The mean unsigned error on 422 chemical energies is 2.32 kcal/mol, which is the best among all tested functionals, with or without nonlocal exchange. The MN15-L functional also provides good results for test sets that are outside the training set. A key issue is that the functional is local (no nonlocal exchange or nonlocal correlation), which makes it relatively economical for treating large and complex systems and solids. Another key advantage is that medium-range correlation energy is built in so that one does not need to add damped dispersion by molecular mechanics in order to predict accurate noncovalent binding energies. We believe that the MN15-L functional should be useful for a wide variety of applications in chemistry, physics, materials science, and molecular biology.

  12. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    PubMed Central

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of instantaneously measuring line-averaged values of gas velocity, temperature and relative humidity (RH) by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C, and relative humidity 0–100% with an accuracy better than 2% RH above 50 °C. The main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments. PMID:22163610

  13. Effects of the Presence of Audio and Type of Game Controller on Learning of Rhythmic Accuracy

    ERIC Educational Resources Information Center

    Thomas, James William

    2017-01-01

    "Guitar Hero III" and similar games potentially offer a vehicle for improvement of musical rhythmic accuracy with training delivered in both visual and auditory formats and by use of its novel guitar-shaped interface; however, some theories regarding multimedia learning suggest sound is a possible source of extraneous cognitive load…

  14. A Theory and Experiments for Detecting Shock Locations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Johnson, D. K.; Adamovsky, G.

    1994-01-01

    In this paper we present a simplified one-dimensional theory for predicting the locations of normal shocks in a converging-diverging nozzle. The theory assumes quasi-one-dimensional flow that is accelerated in the throat area. Optical aspects of the model consider propagation of electromagnetic fields transverse to the shock front. The theory solves an inverse problem: from the measured intensity it reconstructs an index-of-refraction profile for the shock. From this profile and the Gladstone-Dale relation, the density in the flow field is determined, thus determining the shock location. Experiments show agreement with the theory; in particular, the shock location is determined to within 10 percent. Both the theoretical and the experimental results are presented to validate the procedures in this work.

  15. Empathic Accuracy in Adolescents with Autism Spectrum Disorders and Adolescents with Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Demurie, Ellen; De Corel, Maaike; Roeyers, Herbert

    2011-01-01

    In research on theory of mind (ToM) in individuals with an autism spectrum disorder (ASD) mainly static mind-reading tasks were used. In this study both a static (Eyes Test) and a more naturalistic (empathic accuracy task) ToM measure were used to investigate the perspective taking abilities of adolescents with ASD (n = 13), adolescents with…

  16. Results of Li-Tho trial: a prospective randomized study on effectiveness of LigaSure® in lung resections.

    PubMed

    Bertolaccini, Luca; Viti, Andrea; Cavallo, Antonio; Terzi, Alberto

    2014-04-01

    The role of the electrothermal bipolar tissue sealing system (LigaSure®, (LS); Covidien, Inc., CO, USA) in thoracic surgery is still undefined, and reports of its use are still limited. The objective of the trial was to evaluate the costs and benefits of LS in major lung resection surgery. A randomized blinded study of a consecutive series of 100 patients undergoing lobectomy was undertaken. After muscle-sparing thoracotomy and classification of lung fissures according to Craig-Walker, patients with fissure Grade 2-4 were randomized to Stapler-group or LS-group fissure completion. Recorded parameters were analysed for differences in selected intraoperative and postoperative outcomes. Statistical analysis was performed with the bootstrap method. Pearson's χ² test and Fisher's exact test were used to calculate probability values for comparisons of dichotomous variables. Cost-benefit evaluation was performed using Pareto optimal analysis. There were no significant differences between groups regarding demographic and baseline characteristics. No patient was withdrawn from the study and no adverse effect was recorded. There was no mortality and there were no major complications in either group. There were no statistically significant differences in operative time or morbidity between the LS group and the Stapler group. The LS group showed a statistically non-significant increase in postoperative air leaks in the first 24 postoperative hours and a statistically significant increase in drainage volume. No statistically significant difference in hospital length of stay was observed. Overall, the LS group had a favourable multi-criteria cost/benefit analysis with a good 'Pareto optimum'. LS is a safe device for thoracic surgery and can be a valid alternative to staplers. In this setting, LS allows preservation of functional lung tissue. As to costs, LS seems equivalent to staplers.

  17. Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.

    PubMed

    Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G

    2012-11-13

    The molecular ion H3+ is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H3+ is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm⁻¹ relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm⁻¹. Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm⁻¹ is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.

  18. Refined Zigzag Theory for Laminated Composite and Sandwich Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco

    2009-01-01

    A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C⁰-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.

  19. Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory

    NASA Astrophysics Data System (ADS)

    Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara

    2018-05-01

    We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van-der-Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included in the bQCE formalism, giving access to ionicities as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties with good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol⁻¹ and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates and an excellent linear correlation with vaporization enthalpies of their respective parent amines is observed.

  20. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off. Section 53.53, Protection of Environment, ENVIRONMENTAL PROTECTION ... (a) Overview. This test procedure is designed to evaluate a candidate ... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The ...

  1. Rotary-wing aerodynamics. Volume 1: Basic theories of rotor aerodynamics with application to helicopters. [momentum, vortices, and potential theory

    NASA Technical Reports Server (NTRS)

    Stepniewski, W. Z.

    1979-01-01

    The concept of rotary-wing aircraft in general is defined. The energy effectiveness of helicopters is compared with that of other static thrust generators in hover, as well as with various air and ground vehicles in forward translation. The most important aspects of rotor-blade dynamics and rotor control are reviewed. The simple physicomathematical model of the rotor offered by the momentum theory is introduced and its usefulness and limitations are assessed. The combined blade-element and momentum theory approach, which provides greater accuracy in performance predictions, is described as well as the vortex theory which models a rotor blade by means of a vortex filament or vorticity surface. The application of the velocity and acceleration potential theory to the determination of flow fields around three dimensional, non-rotating bodies as well as to rotor aerodynamic problems is described. Airfoil sections suitable for rotors are also considered.

  2. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  3. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Addressing the influence of round-grating dividing error, rolling-wheel eccentricity and surface shape errors, this paper provides a rolling-wheel-based amendment method that builds a composite error model including all of these influence factors and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error amendment method can improve the diameter measurement accuracy achievable with rolling-wheel theory. The method has wide application prospects for measurement accuracies higher than 5 μm/m.

  4. A non-asymptotic homogenization theory for periodic electromagnetic structures.

    PubMed

    Tsukerman, Igor; Markel, Vadim A

    2014-08-08

    Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.

  5. Endochronic theory of transient creep and creep recovery

    NASA Technical Reports Server (NTRS)

    Wu, H. C.; Chen, L.

    1979-01-01

    Short time creep and creep recovery were investigated by means of the endochronic theory of viscoplasticity. It is shown that the constitutive equations for constant-strain-rate stress-strain behavior, creep, creep recovery, and stress relaxation can all be derived from the general constitutive equation by imposing appropriate constraints. In this unified approach, the effect of strain-hardening is naturally accounted for when describing creep and creep recovery. The theory predicts with reasonable accuracy the creep and creep recovery behaviors of Aluminum 1100-0 at 150 °C. It was found that the strain-rate history at the prestraining stage affects the subsequent creep. A critical stress was also established for creep recovery. The theory predicts a forward creep for creep recovery stresses greater than the critical stress. For creep recovery stresses less than the critical stress, the theory predicts a normal strain recovery.

  6. Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays

    NASA Astrophysics Data System (ADS)

    Beecken, B. P.; Fossum, E. R.

    1996-07-01

    Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine how the accuracy of measured noise depends on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
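
    The record does not spell out the estimator, but conversion gain is commonly obtained with the mean-variance (photon-transfer) method, and the sample-size dependence of a variance estimate gives the kind of accuracy scaling discussed above. The sketch below is a generic illustration under those assumptions; the function names and the Gaussian-noise approximation are not from the paper.

```python
import numpy as np

def conversion_gain_photon_transfer(frames):
    """Generic mean-variance (photon-transfer) estimate of conversion gain.

    For a shot-noise-limited signal, var(DN) = mean(DN) / K with K in e-/DN,
    so K = mean / var. Offset and read noise are ignored for brevity.
    `frames` is a stack of flat-field frames, shape (n_frames, rows, cols).
    """
    frames = np.asarray(frames, dtype=float)
    mean_signal = frames.mean()
    # Temporal variance per pixel (removes fixed-pattern noise), averaged over pixels.
    temporal_var = frames.var(axis=0, ddof=1).mean()
    return mean_signal / temporal_var

def variance_relative_uncertainty(n_samples):
    """Approximate relative standard error of a sample variance estimated from
    n_samples Gaussian-distributed samples: sqrt(2 / (n_samples - 1))."""
    return np.sqrt(2.0 / (n_samples - 1))
```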

  7. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML), in which models are learned from large amounts of observed data and predictions are made without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
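
    As a rough illustration of the general idea of pairing a physics-based propagator with a learned error model (the paper's actual features, learning algorithm, and catalog setup are not reproduced here; the names below are hypothetical), one can train a regressor on the residuals of the physics-based prediction and add the predicted residual back as a correction:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_orbit_error_model(features, physics_pred, observed):
    """Fit a model of the physics-based propagator's error for one coordinate.

    features     : (n, d) array, e.g. epoch, propagation horizon, orbital elements
    physics_pred : (n,) physics-based predictions of the coordinate
    observed     : (n,) corresponding observed (truth) values
    The regression target is the residual, so the ML model only has to
    capture what the physics model gets wrong.
    """
    residuals = np.asarray(observed) - np.asarray(physics_pred)
    model = GradientBoostingRegressor()
    model.fit(features, residuals)
    return model

def corrected_prediction(model, features, physics_pred):
    """Physics prediction plus the learned error correction."""
    return np.asarray(physics_pred) + model.predict(features)
```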

  8. Accuracy of megavolt radiation dosimetry using thermoluminescent lithium fluoride.

    PubMed

    Rudén, B I; Bengtsson, L G

    1977-04-01

    The relative light output per Gy in polystyrene for roentgen beams of 6 and 42 MV and electrons between 2.2 and 34.5 MeV relative to 60Co gamma radiation is reported for different kinds of LiF dosemeters. The distribution of the absorbed dose inside a 0.25 and 0.4 mm thick LiF-teflon disc surrounded by polystyrene and irradiated with 60Co, 42 MV roentgen radiation and 39 MeV electrons was measured using 0.01 and 0.02 mm thick LiF-teflon discs. The measurements show that the absorbed dose distribution in the dosemeter depends on the energy of the radiation. When flat dosemeters were used, differences between the signals measured at the two orientations possible during read-out could easily amount to several per cent, and for this reason 0.4 mm and 0.5 mm LiF-Teflon discs were not trusted when the highest accuracy was required. The cavity theory by Burlin does not account for the phenomena caused by differences in electron scattering properties of the dosemeter and the phantom material. Some suggestions are presented for a different cavity theory for flat dosemeters dealing also with these phenomena. It describes the results to about the same degree of approximation as the Burlin theory, and fails to explain the observed energy dependence for electrons.

  9. Recall Latencies, Confidence, and Output Positions of True and False Memories: Implications for Recall and Metamemory Theories

    ERIC Educational Resources Information Center

    Jou, Jerwen

    2008-01-01

    Recall latency, recall accuracy rate, and recall confidence were examined in free recall as a function of recall output serial position using a modified Deese-Roediger-McDermott paradigm to test a strength-based theory against the dual-retrieval process theory of recall output sequence. The strength theory predicts the item output sequence to be…

  10. Theory of Mind, Inhibitory Control, and Preschool-Age Children's Suggestibility in Different Interviewing Contexts

    ERIC Educational Resources Information Center

    Scullin, Matthew H.; Bonner, Karri

    2006-01-01

    The current study examined the relations among 3- to 5-year-olds' theory of mind, inhibitory control, and three measures of suggestibility: yielding to suggestive questions (yield), shifting answers in response to negative feedback (shift), and accuracy in response to misleading questions during a pressured interview about a live event. Theory of…

  11. Towards an exact correlated orbital theory for electrons

    NASA Astrophysics Data System (ADS)

    Bartlett, Rodney J.

    2009-12-01

    The formal and computational attraction of effective one-particle theories like Hartree-Fock and density functional theory raises the question of how far such approaches can be taken to offer exact results for selected properties of electrons in atoms, molecules, and solids. Some properties, like principal ionization potentials and electron affinities, can be exactly described within an effective one-particle theory. This fact can be used to develop equations for a correlated orbital theory (COT) that guarantees a correct one-particle energy spectrum. They are built upon a coupled-cluster based, frequency-independent self-energy operator presented here, which distinguishes the approach from Dyson theory. The COT also offers an alternative to Kohn-Sham density functional theory (DFT), whose objective is to represent the electronic density exactly as a single determinant, while paying less attention to the energy spectrum. For any estimate of the two-electron terms, COT offers a litmus test of its accuracy for principal IPs and EAs. This feature for approximating the COT equations is illustrated numerically.

  12. Accuracy of Perceptual and Acoustic Methods for the Detection of Inspiratory Loci in Spontaneous Speech

    PubMed Central

    Wang, Yu-Tsai; Nip, Ignatius S. B.; Green, Jordan R.; Kent, Ray D.; Kent, Jane Finley; Ullman, Cara

    2012-01-01

    The current study investigates the accuracy of perceptually and acoustically determined inspiratory loci in spontaneous speech for the purpose of identifying breath groups. Sixteen participants were asked to talk about simple topics in daily life at a comfortable speaking rate and loudness while connected to a pneumotach and audio microphone. The locations of inspiratory loci were determined based on the aerodynamic signal, which served as a reference for loci identified perceptually and acoustically. Signal detection theory was used to evaluate the accuracy of the methods. The results showed that the greatest accuracy in pause detection was achieved (1) perceptually based on the agreement between at least 2 of the 3 judges; (2) acoustically using a pause duration threshold of 300 ms. In general, the perceptually-based method was more accurate than was the acoustically-based method. Inconsistencies among perceptually-determined, acoustically-determined, and aerodynamically-determined inspiratory loci for spontaneous speech should be weighed in selecting a method of breath-group determination. PMID:22362007
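
    For orientation, a minimal sketch of threshold-based acoustic pause detection using the 300 ms duration criterion reported above; the frame length and energy threshold are assumptions, not values from the study, and the perceptual and aerodynamic reference methods are not modeled.

```python
import numpy as np

def detect_pauses(audio, sr, energy_thresh=1e-4, min_pause_s=0.300, frame_s=0.010):
    """Flag low-energy stretches lasting at least min_pause_s as candidate pauses.

    audio : 1-D array of samples; sr : sample rate in Hz.
    The 300 ms minimum pause duration follows the record above; the energy
    threshold and 10 ms framing are illustrative assumptions.
    Returns a list of (start_time_s, end_time_s) tuples.
    """
    audio = np.asarray(audio, dtype=float)
    frame_len = int(sr * frame_s)
    n_frames = len(audio) // frame_len
    energy = np.array([np.mean(audio[i * frame_len:(i + 1) * frame_len] ** 2)
                       for i in range(n_frames)])
    silent = energy < energy_thresh
    pauses, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i
        elif not is_silent and start is not None:
            if (i - start) * frame_s >= min_pause_s:
                pauses.append((start * frame_s, i * frame_s))
            start = None
    if start is not None and (n_frames - start) * frame_s >= min_pause_s:
        pauses.append((start * frame_s, n_frames * frame_s))
    return pauses
```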

  13. Problem-solving and learning in Carib grackles: individuals show a consistent speed-accuracy trade-off.

    PubMed

    Ducatez, S; Audet, J N; Lefebvre, L

    2015-03-01

    The generation and maintenance of within-population variation in cognitive abilities remain poorly understood. Recent theories propose that this variation might reflect the existence of consistent cognitive strategies distributed along a slow-fast continuum influenced by shyness. The slow-fast continuum might be reflected in the well-known speed-accuracy trade-off, where animals cannot simultaneously maximise the speed and the accuracy with which they perform a task. We test this idea on 49 wild-caught Carib grackles (Quiscalus lugubris), a tame opportunistic generalist Icterid bird in Barbados. Grackles that are fast at solving novel problems involving obstacle removal to reach visible food perform consistently over two different tasks, spend more time per trial attending to both tasks, and are those that show more shyness in a pretest. However, they are also the individuals that make more errors in a colour discrimination task requiring no new motor act. Our data reconcile some of the mixed positive and negative correlations reported in the comparative literature on cognitive tasks, suggesting that a speed-accuracy trade-off could lead to negative correlations between tasks favouring speed and tasks favouring accuracy, but still reveal consistent strategies based on stable individual differences.

  14. Physical activity and individuals with spinal cord injury: accuracy and quality of information on the Internet.

    PubMed

    Jetha, Arif; Faulkner, Guy; Gorczynski, Paul; Arbour-Nicitopoulos, Kelly; Martin Ginis, Kathleen A

    2011-04-01

    A number of websites on the Internet promote health-enhancing behaviors among people with spinal cord injury (SCI). However, the information available is of unknown accuracy and quality. To examine the accuracy, quality, and targeting strategies used in online physical activity (PA) information aimed at people with SCI. A purposive sample of 30 frequently accessed websites for individuals with SCI that included PA information was examined. Websites were evaluated based on their descriptive characteristics, level of accuracy in relation to newly defined PA recommendations for people with SCI, technical and theoretical quality (i.e., use of behavioral theories) characteristics, and targeting strategies to promote PA among people with SCI. Descriptive statistics were utilized to illustrate the results of the evaluation. PA information was easily accessible, as rated by the number of clicks required to access information. Only 6 websites (20%) provided specific PA recommendations and these websites exhibited low accuracy. Technically, websites were of high quality with a mean score of 4.1 of a possible 6 points. In contrast, websites had a low level of theoretical quality, with 23 of the 30 websites (77%) scoring below 9 of a possible 14 points (i.e., 64% of a perfect score) for theoretical content. A majority of websites evaluated did not use cognitive (e.g., self-efficacy, self-talk, and perceived social norms) and behavioral (e.g., self-monitoring, motivational readiness, and realistic goal-setting) strategies in their messages. A majority (80%) of the evaluated websites customized information for persons with different injury levels and completeness. Less than half of the websites evaluated tailored PA information toward people at different stages of their injury rehabilitation (37%) or for their caregivers (30%). Accuracy and theoretical quality of PA information presented to people with SCI on the Internet may not be optimal. Websites should be improved to

  15. Possibilities and limitations of rod-beam theories. [nonlinear distortion tensor and nonlinear stress tensors

    NASA Technical Reports Server (NTRS)

    Peterson, D.

    1979-01-01

    Rod-beam theories are founded on hypotheses such as Bernoulli's, which assumes that cross-sections remain plane under deformation. These assumptions, which make rod-beam theories possible, also limit the accuracy of their analysis. It is shown that, from a certain order upward, terms of geometrically nonlinear deformations contradict the rod-beam hypotheses. Consistent application of differential geometry calculus also reveals differences from existing rod theories of higher order. These differences are explained by simple examples.

  16. What History Can Teach Us about Science: Theory and Experiment, Data and Evidence

    ERIC Educational Resources Information Center

    Levere, Trevor H.

    2006-01-01

    Scientists often use more than the results of experiment to arrive at a result; they use anticipation and analogy to arrive at the results that fit their theories, and sometimes they correct results in the light of analogy. They also need to be clear about the difference between accuracy and precision. They do all this using not only theories, but…

  17. Assessment of flat rolling theories for the use in a model-based controller for high-precision rolling applications

    NASA Astrophysics Data System (ADS)

    Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard

    2017-10-01

    In the electrical and medical industries the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a wide variety of small and narrow cold-rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for high-dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory that describes the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. The rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison with the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing times of these two theories reveals that the computing time of Alexander's theory exceeds the 1 ms sampling period of the 1 kHz real-time computer.

  18. Advanced Small Perturbation Potential Flow Theory for Unsteady Aerodynamic and Aeroelastic Analyses

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    2005-01-01

    An advanced small perturbation (ASP) potential flow theory has been developed to improve upon the classical transonic small perturbation (TSP) theories that have been used in various computer codes. These computer codes are typically used for unsteady aerodynamic and aeroelastic analyses in the nonlinear transonic flight regime. The codes exploit the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP theory was developed methodically by first determining the essential elements required to produce full-potential-like solutions with a small perturbation approach on the requisite Cartesian grid. This level of accuracy required a higher-order streamwise mass flux and a mass conserving surface boundary condition. The ASP theory was further developed by determining the essential elements required to produce results that agreed well with Euler solutions. This level of accuracy required mass conserving entropy and vorticity effects, and second-order terms in the trailing wake boundary condition. Finally, an integral boundary layer procedure, applicable to both attached and shock-induced separated flows, was incorporated for viscous effects. The resulting ASP potential flow theory, including entropy, vorticity, and viscous effects, is shown to be mathematically more appropriate and computationally more accurate than the classical TSP theories. The formulaic details of the ASP theory are described fully and the improvements are demonstrated through careful comparisons with accepted alternative results and experimental data. The new theory has been used as the basis for a new computer code called ASP3D (Advanced Small Perturbation - 3D), which also is briefly described with representative results.

  19. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKechnie, Scott; Booth, George H.; Cohen, Aron J.

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density-functional theory (DFT) and wave function methods: Hartree-Fock theory (HF), second-order Møller-Plesset perturbation theory (MP2) and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement, with dramatic failures for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.

  20. A non-asymptotic homogenization theory for periodic electromagnetic structures

    PubMed Central

    Tsukerman, Igor; Markel, Vadim A.

    2014-01-01

    Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions. PMID:25104912

  1. Influence of stimulated Brillouin scattering on positioning accuracy of long-range dual Mach-Zehnder interferometric vibration sensors

    NASA Astrophysics Data System (ADS)

    He, Xiangge; Xie, Shangran; Cao, Shan; Liu, Fei; Zheng, Xiaoping; Zhang, Min; Yan, Han; Chen, Guocai

    2016-11-01

    The properties of noise induced by stimulated Brillouin scattering (SBS) in long-range interferometers and their influences on the positioning accuracy of dual Mach-Zehnder interferometric (DMZI) vibration sensing systems are studied. The SBS noise is found to be white and incoherent between the two arms of the interferometer in a 1-MHz bandwidth range. Experiments on 25-km long fibers show that the root mean square error (RMSE) of the positioning accuracy is consistent with the additive noise model for the time delay estimation theory. A low-pass filter can be properly designed to suppress the SBS noise and further achieve a maximum RMSE reduction of 6.7 dB.
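
    Positioning in dual Mach-Zehnder systems rests on estimating the delay between the two interferometer outputs; a generic cross-correlation time-delay estimator of the kind assumed by the additive-noise analysis is sketched below. The paper's own processing chain, including the low-pass filter used to suppress the SBS noise, is not reproduced.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the time delay between two sensor outputs by locating the peak
    of their cross-correlation, refined with parabolic interpolation.

    sig_a, sig_b : 1-D arrays of equal sampling rate fs (Hz).
    Returns the delay in seconds (sub-sample resolution).
    """
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    k = int(np.argmax(corr))
    # Parabolic interpolation around the peak for sub-sample resolution.
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_samples = k - (len(b) - 1)   # zero lag sits at index len(b) - 1
    return lag_samples / fs
```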

  2. Projected Hartree-Fock theory as a polynomial of particle-hole excitations and its combination with variational coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Qiu, Yiheng; Henderson, Thomas M.; Scuseria, Gustavo E.

    2017-05-01

    Projected Hartree-Fock theory provides an accurate description of many kinds of strong correlations but does not properly describe weakly correlated systems. Coupled cluster theory, in contrast, does the opposite. It therefore seems natural to combine the two so as to describe both strong and weak correlations with high accuracy in a relatively black-box manner. Combining the two approaches, however, is made more difficult by the fact that the two techniques are formulated very differently. In earlier work, we showed how to write spin-projected Hartree-Fock in a coupled-cluster-like language. Here, we fill in the gaps in that earlier work. Further, we combine projected Hartree-Fock and coupled cluster theory in a variational formulation and show how the combination performs for the description of the Hubbard Hamiltonian and for several small molecular systems.

  3. The Effect of Timbre and Vibrato on Vocal Pitch Matching Accuracy

    NASA Astrophysics Data System (ADS)

    Duvvuru, Sirisha

    vibrato did not affect the pitch matching accuracy. However, the interesting finding of the study was that singers attempted to match the timbre of stimuli with vibrato. Results are discussed in terms of interactions between pitch and timbre from auditory perceptual as well as physiological point of view and how current theories of pitch perception relate to this phenomenon. Neither physiological nor auditory perceptual mechanisms provide complete explanations for the results obtained in the study. From a perceptual point of view, an interaction between pitch and timbre seems to be more complex, for spectral and temporal theories are limited in explaining these interactions. Also, possible explanations for the phenomenon of timbre matching are provided.

  4. Last stop on the road to repair: structure of E. coli DNA ligase bound to nicked DNA-adenylate.

    PubMed

    Nandakumar, Jayakrishnan; Nair, Pravin A; Shuman, Stewart

    2007-04-27

    NAD⁺-dependent DNA ligases (LigA) are ubiquitous in bacteria and essential for growth. Their distinctive substrate specificity and domain organization vis-à-vis human ATP-dependent ligases make them outstanding targets for anti-infective drug discovery. We report here the 2.3 Å crystal structure of Escherichia coli LigA bound to an adenylylated nick, which captures LigA in a state poised for strand closure and reveals the basis for nick recognition. LigA envelops the DNA within a protein clamp. Large protein domain movements and remodeling of the active site orchestrate progression through the three chemical steps of the ligation reaction. The structure inspires a strategy for inhibitor design.

  5. On the accuracy and reliability of predictions by control-system theory.

    PubMed

    Bourbon, W T; Copeland, K E; Dyer, V R; Harman, W K; Mosley, B L

    1990-12-01

    In three experiments we used control-system theory (CST) to predict the results of tracking tasks on which people held a handle to keep a cursor even with a target on a computer screen. 10 people completed a total of 104 replications of the task. In each experiment, there were two conditions: in one, only the handle affected the position of the cursor; in the other, a random disturbance also affected the cursor. From a person's performance during Condition 1, we derived constants used in the CST model to predict the results of Condition 2. In two experiments, predictions occurred a few minutes before Condition 2; in one experiment, the delay was 1 yr. During a 1-min. experimental run, the positions of handle and cursor, produced by the person, were each sampled 1800 times, once every 1/30 sec. During a modeling run, the model predicted the positions of the handle and target for each of the 1800 intervals sampled in the experimental run. In 104 replications, the mean correlation between predicted and actual positions of the handle was .996; SD = .002.
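
    A minimal sketch of a single-loop control model of the kind used in such tracking studies is given below: the cursor is handle plus disturbance, and the modeled participant integrates the cursor-target error. The loop gain would be fitted from the no-disturbance condition, as described above; the structure and the placeholder gain here are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def simulate_tracking(target, disturbance, gain=20.0, dt=1.0 / 30.0):
    """Single-loop control model of a compensatory tracking task.

    cursor = handle + disturbance; handle velocity is proportional to the
    cursor-target error (an integrating output function). The gain would be
    fitted from a run without disturbance; 20.0 is only a placeholder.
    Returns the simulated handle and cursor trajectories.
    """
    target = np.asarray(target, dtype=float)
    disturbance = np.asarray(disturbance, dtype=float)
    n = len(target)
    handle = np.zeros(n)
    cursor = np.zeros(n)
    cursor[0] = handle[0] + disturbance[0]
    for i in range(1, n):
        error = target[i - 1] - cursor[i - 1]
        handle[i] = handle[i - 1] + gain * error * dt  # integrate the error
        cursor[i] = handle[i] + disturbance[i]
    return handle, cursor
```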

  6. Analytic theory of orbit contraction

    NASA Technical Reports Server (NTRS)

    Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.

    1977-01-01

    The motion of a satellite in orbit, subject to atmospheric force and the motion of a reentry vehicle are governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincare's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.

  7. Fundamental theories of waves and particles formulated without classical mass

    NASA Astrophysics Data System (ADS)

    Fry, J. L.; Musielak, Z. E.

    2010-12-01

    Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they do use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, physical consequences of using the classical mass by both theories are explored, and a novel approach that allows formulating fundamental (Galilean invariant) theories of waves and particles without formally introducing the classical mass is presented. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy of performing calculations can be attained by such theory. Natural units in connection with the presented approach are also discussed and justification beyond dimensional analysis is given for the particular choice of such units.

  8. Continuous Glucose Monitoring and Trend Accuracy

    PubMed Central

    Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey

    2014-01-01

    Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy to interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple to use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, performance table and TI metric. PMID:24876437
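
    The abstract states that trend accuracy is quantified with a dot product but does not give the exact construction; the sketch below uses one plausible reading, in which each trend is a (time, glucose-change) vector and agreement is the normalized dot product (cosine) between the reference and sensor trend vectors. Treat it as an illustration, not the Trend Compass definition.

```python
import numpy as np

def trend_agreement(bg, sg, dt_hours=1.0):
    """Agreement between reference (BG) and sensor (SG) glucose trends.

    Each trend over an interval is represented as the vector
    (dt_hours, delta_glucose); the score is the normalized dot product
    (cosine of the angle) between the BG and SG trend vectors.
    Returns one value in [-1, 1] per interval.
    """
    bg, sg = np.asarray(bg, float), np.asarray(sg, float)
    scores = []
    for i in range(1, len(bg)):
        v_ref = np.array([dt_hours, bg[i] - bg[i - 1]])
        v_sen = np.array([dt_hours, sg[i] - sg[i - 1]])
        cos = v_ref @ v_sen / (np.linalg.norm(v_ref) * np.linalg.norm(v_sen))
        scores.append(cos)
    return np.array(scores)
```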

  9. Trait Perception Accuracy and Acquaintance Within Groups: Tracking Accuracy Development.

    PubMed

    Brown, Jill A; Bernieri, Frank

    2017-05-01

    Previous work on trait perception has evaluated accuracy at discrete stages of relationships (e.g., strangers, best friends). A relatively limited body of literature has investigated changes in accuracy as acquaintance within a dyad or group increases. Small groups of initially unacquainted individuals spent more than 30 hr participating in a wide range of activities designed to represent common interpersonal contexts (e.g., eating, traveling). We calculated how accurately each participant judged others in their group on the big five traits across three distinct points within the acquaintance process: zero acquaintance, after a getting-to-know-you conversation, and after 10 weeks of interaction and activity. Judgments of all five traits exhibited accuracy above chance levels after 10 weeks. An examination of the trait rating stability revealed that much of the revision in judgments occurred not over the course of the 10-week relationship as suspected, but between zero acquaintance and the getting-to-know-you conversation.

  10. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel-displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete-basis-set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  11. Study on Parameter Identification of Assembly Robot based on Screw Theory

    NASA Astrophysics Data System (ADS)

    Yun, Shi; Xiaodong, Zhang

    2017-11-01

    The kinematic model of an assembly robot is one of the most important factors affecting its repeatability. In order to improve the accuracy of the positioning model, this paper first establishes the product-of-exponentials model of the ER16-1600 assembly robot on the basis of screw theory, and then identifies the parameters of the ER16-1600 robot using an iterative least-squares method. Comparison of experiments before and after calibration shows that the method yields a clear improvement in the positioning accuracy of the assembly robot.
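
    For context, a hedged sketch of the product-of-exponentials forward kinematics that screw-theory-based calibration builds on; the twists and home pose are placeholders, and the iterative least-squares identification step itself is not shown.

```python
import numpy as np
from scipy.linalg import expm

def twist_matrix(omega, v):
    """4x4 matrix form of a twist xi = (omega, v)."""
    wx, wy, wz = omega
    omega_hat = np.array([[0.0, -wz, wy],
                          [wz, 0.0, -wx],
                          [-wy, wx, 0.0]])
    xi = np.zeros((4, 4))
    xi[:3, :3] = omega_hat
    xi[:3, 3] = v
    return xi

def poe_forward_kinematics(twists, thetas, g_home):
    """Product-of-exponentials model: g(theta) = exp(xi_1 th_1) ... exp(xi_n th_n) g_home.

    `twists` is a list of (omega, v) pairs describing each joint axis in the
    reference configuration and `g_home` is the end-effector home pose.
    Kinematic calibration adjusts these twist parameters (e.g. by iterative
    least squares) so that the model matches measured end-effector poses.
    """
    g = np.eye(4)
    for (omega, v), theta in zip(twists, thetas):
        g = g @ expm(twist_matrix(omega, v) * theta)
    return g @ g_home
```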

  12. Stationary statistical theory of two-surface multipactor regarding all impacts for efficient threshold analysis

    NASA Astrophysics Data System (ADS)

    Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang

    2018-01-01

    Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a negotiation between the calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase with both the single-sided and double-sided impacts considered is formulated. The modeling results indicate that the improved stationary statistical theory can not only obtain equally good accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieve high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumption in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and also more significant for the high order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.

  13. A comparison of integral equations and density functional theory versus Monte Carlo for hard dumbbells near a hard wall

    NASA Astrophysics Data System (ADS)

    Henderson, Douglas; Quintana, Jacqueline; Sokołowski, Stefan

    1995-03-01

    A comparison of the Percus-Yevick-Pynn-Lado model theory and a density functional (DF) theory of nonuniform fluids of nonspherical particles is performed. The DF used is a new generalization of Tarazona's theory. The conclusion is that DF theory provides a preferable route to describe the system under consideration. Its accuracy can be improved with a better approximation for the direct correlation function (DCF) of the bulk system.

  14. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  15. Assessment of the Applicability of Hertzian Contact Theory to Edge-Loaded Prosthetic Hip Bearings

    PubMed Central

    Sanders, Anthony P.; Brannon, Rebecca M.

    2011-01-01

    The components of prosthetic hip bearings may experience in-vivo subluxation and edge loading on the acetabular socket as a result of joint laxity, causing abnormally high, damaging contact stresses. In this research, edge-loaded contact of prosthetic hips is examined analytically and experimentally in the most commonly used categories of material pairs. In edge-loaded ceramic-on-ceramic hips, Hertzian contact theory yields accurate (conservatively, <10% error) predictions of the contact dimensions. Moreover, Hertzian theory successfully captures slope and curvature trends in the dependence of contact patch geometry on the applied load. In an edge-loaded ceramic-on-metal pair, a similar degree of accuracy is observed in the contact patch length; however, the contact width is less accurately predicted due to the onset of subsurface plasticity, which is predicted for loads >400 N. Hertzian contact theory is shown to be ill-suited to edge-loaded ceramic-on-polyethylene pairs due to polyethylene’s nonlinear material behavior. This work elucidates the methods and the accuracy of applying classical contact theory to edge-loaded hip bearings. The results help to define the applicability of Hertzian theory to the design of new components and materials to better resist severe edge loading contact stresses. PMID:21962465
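
    For reference, the classical Hertz point-contact relations underlying such comparisons are summarized below; for a conforming ball-in-socket pair the concave radius enters with a negative sign, and a rigorous treatment of edge loading requires elliptical-contact corrections.

```latex
% Hertz point contact (circular contact patch) under normal load F:
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2},
\qquad
\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2},
\qquad
a = \left(\frac{3FR}{4E^{*}}\right)^{1/3},
\qquad
p_0 = \frac{3F}{2\pi a^{2}} .
```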

  16. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  17. Hard sphere perturbation theory for fluids with soft-repulsive-core potentials

    NASA Astrophysics Data System (ADS)

    Ben-Amotz, Dor; Stell, George

    2004-03-01

    The thermodynamic properties of fluids with very soft repulsive-core potentials, resembling those of some liquid metals, are predicted with unprecedented accuracy using a new first-order thermodynamic perturbation theory. This theory is an extension of Mansoori-Canfield/Rasaiah-Stell (MCRS) perturbation theory, obtained by including a configuration integral correction recently identified by Mon, who evaluated it by computer simulation. In this work we derive an analytic expression for Mon's correction in terms of the radial distribution function of the soft-core fluid, g0(r), approximated using Lado's self-consistent extension of Weeks-Chandler-Andersen (WCA) theory. Comparisons with WCA and MCRS predictions show that our new extended-MCRS theory outperforms other first-order theories when applied to fluids with very soft inverse-power potentials (n⩽6), and predicts free energies that are within 0.3kT of simulation results up to the fluid freezing point.
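
    For orientation, the generic first-order (high-temperature) perturbation expression that MCRS-type theories start from is shown below; the additional Mon correction term discussed in the record is not written out here.

```latex
% Split the pair potential as u(r) = u_0(r) + u_1(r) about a hard-sphere
% reference; to first order the Helmholtz free energy per particle is
\frac{A}{N} \approx \frac{A_0}{N}
  + \frac{\rho}{2}\int g_0(r)\,u_1(r)\,4\pi r^{2}\,\mathrm{d}r ,
% with g_0(r) the radial distribution function of the reference fluid.
```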

  18. Spectral reflectance inversion with high accuracy on green target

    NASA Astrophysics Data System (ADS)

    Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun

    2016-09-01

    Using Landsat-7 ETM remote sensing data, the inversion of the spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. To address the problem of low inversion accuracy, a custom-atmospheric-conditions method based on the moderate resolution transmission model (MODTRAN) is put forward, in which real atmospheric parameters are taken into account. The atmospheric radiative transfer theory used to calculate the atmospheric parameters is introduced first, and the spectral reflectance inversion process is then illustrated in detail. Finally, the inversion result is compared with that of the simulated-atmospheric-conditions method widely used by previous researchers. The comparison shows that the inversion accuracy of the proposed method is higher in all inversion bands; the inverted spectral reflectance curve is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of green plants, which differ markedly from those of green artificial targets. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion from remote sensing imagery. This research is helpful for identifying green artificial targets hidden in vegetation, which is of great significance for the precise targeting of green-camouflaged weapons in the military field.

  19. The application of the integral equation theory to study the hydrophobic interaction

    PubMed Central

    Mohorič, Tomaž; Urbic, Tomaz; Hribar-Lee, Barbara

    2014-01-01

    Wertheim's integral equation theory was tested against newly obtained Monte Carlo computer simulations to describe the potential of mean force between two hydrophobic particles. An excellent agreement was obtained between the theoretical and simulation results. Further, Wertheim's integral equation theory with the polymer Percus-Yevick closure correctly describes, at least qualitatively (with respect to the experimental data), the solvation structure under conditions where the simulation results are difficult to obtain with sufficient accuracy. PMID:24437891

  20. Generating relevant kinetic Monte Carlo catalogs using temperature accelerated dynamics with control over the accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Voter, Arthur

    2009-01-01

    We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. The effective accuracy of a p-TAD calculation is derived for the case when a KMC catalog is reused for conditions different from those for which the catalog was originally generated. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.

  1. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  2. Are Teachers' Implicit Theories of Creativity Related to the Recognition of Their Students' Creativity?

    ERIC Educational Resources Information Center

    Gralewski, Jacek; Karwowski, Maciej

    2018-01-01

    We examine the structure of implicit theories of creativity among Polish high schools teachers and the role those theories play for the accuracy of teachers' assessment of their students' creativity. Latent class analysis revealed the existence of four classes of teachers, whose perception of a creative student differed: two of these classes…

  3. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    PubMed

    Lee, S Hong; Clark, Sam; van der Werf, Julius H J

    2017-01-01

    Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
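
    The dependence of expected accuracy on reference size, heritability and Me can be illustrated with the widely used Daetwyler/Goddard-style approximation r = sqrt(N h^2 / (N h^2 + Me)). The minimal sketch below only illustrates that relationship, and is not the software released by the authors; all numbers are hypothetical.

```python
import math

def expected_accuracy(n_ref, h2, m_e):
    """Expected genomic prediction accuracy under the approximation
    r = sqrt(N * h2 / (N * h2 + Me)).

    n_ref : number of individuals in the reference population
    h2    : heritability of the trait
    m_e   : effective number of independent chromosome segments; a more
            closely related reference implies a smaller effective Ne and
            hence a smaller Me, which raises the expected accuracy
    """
    return math.sqrt(n_ref * h2 / (n_ref * h2 + m_e))

# Hypothetical comparison: close relatives (small Me) vs. a distantly
# related reference population (large Me), same amount of data.
print(expected_accuracy(n_ref=2000, h2=0.3, m_e=500))    # ~0.74
print(expected_accuracy(n_ref=2000, h2=0.3, m_e=20000))  # ~0.17
```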

  4. Social Power Increases Interoceptive Accuracy

    PubMed Central

    Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk

    2017-01-01

    Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similar to, and independent of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501

  5. PADÉ APPROXIMANTS FOR THE EQUATION OF STATE FOR RELATIVISTIC HYDRODYNAMICS BY KINETIC THEORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Shang-Hsi; Yang, Jaw-Yen, E-mail: shanghsi@gmail.com

    2015-07-20

    A two-point Padé approximant (TPPA) algorithm is developed for the equation of state (EOS) for relativistic hydrodynamic systems, which are described by the classical Maxwell–Boltzmann statistics and the semiclassical Fermi–Dirac statistics with complete degeneracy. The underlying rational function is determined by the ratios of the macroscopic state variables with various orders of accuracy taken at the extreme relativistic limits. The nonunique TPPAs are validated by Taub's inequality for the consistency of the kinetic theory and the special theory of relativity. The proposed TPPA is utilized in deriving the EOS of the dilute gas and in calculating the specific heat capacity, the adiabatic index function, and the isentropic sound speed of the ideal gas. Some general guidelines are provided for the application of an arbitrary accuracy requirement. The superiority of the proposed TPPA is manifested in manipulating the constituent polynomials of the approximants, which avoids the arithmetic complexity of struggling with the modified Bessel functions and the hyperbolic trigonometric functions arising from the relativistic kinetic theory.

  6. Peer-group affiliation and adolescent self-esteem: an integration of ego-identity and symbolic-interaction theories.

    PubMed

    Brown, B B; Lohr, M J

    1987-01-01

    To evaluate expectations derived from ego-identity theory and symbolic-interaction theories about the association between self-concept and peer-group affiliations in adolescence, we examined the self-esteem of 221 7th through 12th graders associated by peers with one of five major school crowds and 106 students relatively unknown by classmates and not associated with any school crowd. Among crowd members, self-esteem was directly related to the position of one's crowd in the peer-group status hierarchy (based on both peer-rated and self-perceived crowd affiliation). Outsiders' self-esteem differed in relation to the accuracy of their reflected appraisal of and the salience they attached to crowd affiliation. Crowd members as a whole exhibited higher self-esteem than outsiders as a whole. Differences, however, were mediated by crowd status, salience of crowd affiliation, and the accuracy of reflected appraisals. An adequate interpretation of the findings required an integration of Festinger's (1954, 1957) social comparisons and cognitive-dissonance theories, Cooley's (1902) notions of reflected appraisal, and Newman and Newman's (1976) extrapolations from ego-identity theory.

  7. Spatial and temporal task characteristics as stress: a test of the dynamic adaptability theory of stress, workload, and performance.

    PubMed

    Szalma, James L; Teo, Grace W L

    2012-03-01

    The goal for this study was to test assertions of the dynamic adaptability theory of stress, which proposes two fundamental task dimensions, information rate (temporal properties of a task) and information structure (spatial properties of a task). The theory predicts adaptive stability across stress magnitudes, with progressive and precipitous changes in adaptive response manifesting first as increases in perceived workload and stress and then as performance failure. Information structure was manipulated by varying the number of displays to be monitored (1, 2, 4 or 8 displays). Information rate was manipulated by varying stimulus presentation rate (8, 12, 16, or 20 events/min). A signal detection task was used in which critical signals were pairs of digits that differed by 0 or 1. Performance accuracy declined and workload and stress increased as a function of increased task demand, with a precipitous decline in accuracy at the highest demand levels. However, the form of performance change as well as the pattern of relationships between speed and accuracy and between performance and workload/stress indicates that some aspects of the theory need revision. Implications of the results for the theory and for future research are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. The Long-Term Sustainability of Different Item Response Theory Scaling Methods

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert R.

    2011-01-01

    This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…

  9. Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps

    NASA Astrophysics Data System (ADS)

    Zlinszky, A.; Kania, A.

    2016-06-01

    Instead of assigning every map pixel to a single class, fuzzy classification records not only the class assigned to each pixel but also the certainty of that class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations, and the rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles", which bin pixels according to the probability of the dominant class while also showing the probabilities of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map, both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active-learning-based iterative classification, and can be applied for operational use.
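
    A dominance profile of the kind described can be computed directly from per-pixel class probabilities (for example, the vote fractions of an ensemble classifier): pixels are binned by the probability of their dominant class, and the mean probability of every class is reported within each bin. The minimal sketch below is an illustration under that assumption, not the authors' implementation; the array shapes and values are hypothetical.

```python
import numpy as np

def dominance_profile(proba, n_bins=10):
    """Bin pixels by the probability of their dominant class.

    proba : (n_pixels, n_classes) array of per-pixel class probabilities
            (each row sums to 1), e.g. vote fractions of an ensemble.
    Returns (pixel counts per bin, mean class probabilities per bin).
    """
    p_dom = proba.max(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p_dom, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    means = np.vstack([
        proba[bins == b].mean(axis=0) if counts[b] else
        np.zeros(proba.shape[1]) for b in range(n_bins)
    ])
    return counts, means

# Hypothetical fuzzy map with 3 classes and 5 pixels.
proba = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.4, 0.2],
                  [0.9, 0.05, 0.05],
                  [0.3, 0.6, 0.1],
                  [0.5, 0.3, 0.2]])
counts, means = dominance_profile(proba, n_bins=5)
print(counts)   # how many pixels fall in each dominant-probability bin
```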

  10. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnoses accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3), P= 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  11. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has received increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection but achieved low performance, and they did not account for the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Across four kinds of experiments, we find that the proposed model achieves a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly.
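
    As background for the fusion step, Dempster's rule of combination merges the basic probability assignments (BPAs) from two sensors and renormalizes away the conflicting mass. The minimal Python sketch below shows only this generic combination rule over a two-hypothesis frame; the sensor weighting (ODS) and the regression-based BPA (RBPA) of the proposed system are not reproduced, and all masses are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical BPAs from two detectors over the frame {normal, anomaly}.
NORMAL, ANOMALY = frozenset({"normal"}), frozenset({"anomaly"})
THETA = NORMAL | ANOMALY  # the full frame (ignorance)
m_flow = {ANOMALY: 0.6, NORMAL: 0.1, THETA: 0.3}
m_payload = {ANOMALY: 0.5, NORMAL: 0.2, THETA: 0.3}
print(dempster_combine(m_flow, m_payload))
```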

  12. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into the active and secondary space. The first-order wave function and the second- and third-order energies are easily computed by using one step Davidson iteration. Our formulation has several advantages including (i) keeping a balance between the efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small, when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.

  13. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  14. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
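
    The roles of the Criterion and Threshold parameters can be illustrated with a generic evidence-accumulation simulation: a starting-point shift biases which response is produced, while the boundary separation trades speed against accuracy. The sketch below is a minimal random-walk illustration of this class of models, not the specific sequential sampling model fitted in the study; the parameter names and values are hypothetical.

```python
import random

def random_walk_trial(drift, threshold, start_bias=0.0, noise=1.0, dt=0.01):
    """Simulate one evidence-accumulation trial.

    drift      : mean rate of evidence toward the 'conflict' boundary
    threshold  : +/- boundary; larger values favour accuracy over speed
    start_bias : shift of the starting point as a fraction of the boundary;
                 positive values bias responses toward 'conflict'
    Returns (response, decision_time).
    """
    x, t = start_bias * threshold, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return ("conflict" if x > 0 else "no conflict"), t

random.seed(1)
trials = [random_walk_trial(drift=0.8, threshold=1.5, start_bias=0.2)
          for _ in range(1000)]
p_conflict = sum(r == "conflict" for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(conflict response) = {p_conflict:.2f}, mean RT = {mean_rt:.2f} s")
```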

  15. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Eno, Larry; Rabitz, Herschel

    1981-08-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves determining the sensitivity of the IOS scattering matrix S_IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h_0 on S_IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  16. The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000

    USGS Publications Warehouse

    Fitzpatrick-Lins, Katherine

    1980-01-01

    Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
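
    The accuracy test described, the ratio of correctly interpreted sample points to points sampled, checked against the 85-percent standard via binomial probability theory, can be sketched as follows. This minimal Python example uses a normal approximation for the one-sided 95-percent lower confidence limit and hypothetical point counts; it illustrates the procedure rather than reproducing the USGS computation itself.

```python
import math

def accuracy_with_lcl(correct, sampled, z=1.645):
    """Proportion of correctly interpreted sample points and the
    one-sided 95% lower confidence limit (normal approximation to
    the binomial)."""
    p = correct / sampled
    se = math.sqrt(p * (1.0 - p) / sampled)
    return p, p - z * se

# Hypothetical tally for one map sheet.
p_hat, lcl = accuracy_with_lcl(correct=183, sampled=200)
print(f"accuracy = {p_hat:.1%}, 95% lower confidence limit = {lcl:.1%}")
print("meets 85% criterion" if lcl >= 0.85 else "fails 85% criterion")
```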

  17. The effect of accuracy motivation on anchoring and adjustment: do people adjust from provided anchors?

    PubMed

    Simmons, Joseph P; LeBoeuf, Robyn A; Nelson, Leif D

    2010-12-01

    Increasing accuracy motivation (e.g., by providing monetary incentives for accuracy) often fails to increase adjustment away from provided anchors, a result that has led researchers to conclude that people do not effortfully adjust away from such anchors. We challenge this conclusion. First, we show that people are typically uncertain about which way to adjust from provided anchors and that this uncertainty often causes people to believe that they have initially adjusted too far away from such anchors (Studies 1a and 1b). Then, we show that although accuracy motivation fails to increase the gap between anchors and final estimates when people are uncertain about the direction of adjustment, accuracy motivation does increase anchor-estimate gaps when people are certain about the direction of adjustment, and that this is true regardless of whether the anchors are provided or self-generated (Studies 2, 3a, 3b, and 5). These results suggest that people do effortfully adjust away from provided anchors but that uncertainty about the direction of adjustment makes that adjustment harder to detect than previously assumed. This conclusion has important theoretical implications, suggesting that currently emphasized distinctions between anchor types (self-generated vs. provided) are not fundamental and that ostensibly competing theories of anchoring (selective accessibility and anchoring-and-adjustment) are complementary. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  18. Fully anharmonic IR and Raman spectra of medium-size molecular systems: accuracy and interpretation†

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien

    2015-01-01

    Computation of full infrared (IR) and Raman spectra (including absolute intensities and transition energies) for medium- and large-sized molecular systems beyond the harmonic approximation is one of the most interesting challenges of contemporary computational chemistry. Contrary to common beliefs, low-order perturbation theory is able to deliver results of high accuracy (actually often better than those issuing from current direct dynamics approaches) provided that anharmonic resonances are properly managed. This perspective sketches the recent developments in our research group toward the development of a robust and user-friendly virtual spectrometer rooted in second-order vibrational perturbation theory (VPT2) and usable also by non-specialists essentially as a black-box procedure. Several examples are explicitly worked out in order to illustrate the features of our computational tool together with the most important ongoing developments. PMID:24346191

  19. Integral equation and thermodynamic perturbation theory for a two-dimensional model of dimerising fluid

    PubMed Central

    Urbic, Tomaz

    2016-01-01

    In this paper we applied an analytical theory to the two-dimensional dimerising fluid. We applied Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the dimerising model with an arbitrary position of the dimerising points relative to the centers of the particles. The theory was used to study thermodynamic and structural properties. To check the accuracy of the theories we compared theoretical results with corresponding results obtained by Monte Carlo computer simulations. The theories are accurate for the different patch positions of the model at all values of the temperature and density studied. IET correctly predicts the pair correlation function of the model. Both TPT and IET are in good agreement with the Monte Carlo values of the energy, pressure, chemical potential, compressibility and ratios of free and bonded particles. PMID:28529396

  20. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    PubMed

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI) ) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' into the before mentioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index when altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
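
    The gain from adding a GEBV to a conventional index can be illustrated with a simplified selection-index calculation. The sketch below assumes each information source is a best linear prediction of the same true breeding value (variance scaled to 1) and that the sources are conditionally independent given that value, so the index covariance structure is fully determined by the source accuracies; this is a generic illustration, not the authors' exact index equations.

```python
import numpy as np

def index_accuracy(source_accuracies):
    """Accuracy r_TI of a selection index combining several EBV-like
    sources, assuming each source i is a best linear prediction of the
    same true breeding value (variance 1) with accuracy r_i and that the
    sources are conditionally independent given that value."""
    r2 = np.array(source_accuracies, dtype=float) ** 2
    P = np.outer(r2, r2)            # covariance between sources
    np.fill_diagonal(P, r2)         # variance of each source
    g = r2                          # covariance of each source with the TBV
    b = np.linalg.solve(P, g)       # index weights
    return float(np.sqrt(b @ g))

# Hypothetical: a parent-average index with accuracy 0.27 alone, versus
# adding a genomic breeding value with accuracy r_mg = 0.5.
print(index_accuracy([0.27]))        # 0.27
print(index_accuracy([0.27, 0.50]))  # ~0.54 under these simplifying assumptions
```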

  1. Algebraic perturbation theory for dense liquids with discrete potentials

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2007-06-01

    A simple theory for the leading-order correction g1(r) to the structure of a hard-sphere liquid with discrete (e.g., square-well) potential perturbations is proposed. The theory makes use of a general approximation that effectively eliminates four-particle correlations from g1(r) with good accuracy at high densities. For the particular case of discrete perturbations, the remaining three-particle correlations can be modeled with a simple volume-exclusion argument, resulting in an algebraic and surprisingly accurate expression for g1(r) . The structure of a discrete “core-softened” model for liquids with anomalous thermodynamic properties is reproduced as an application.

  2. Mechanistic Assessment of DNA Ligase as an Antibacterial Target in Staphylococcus aureus

    PubMed Central

    Podos, Steven D.; Thanassi, Jane A.

    2012-01-01

    We report the use of a known pyridochromanone inhibitor with antibacterial activity to assess the validity of NAD+-dependent DNA ligase (LigA) as an antibacterial target in Staphylococcus aureus. Potent inhibition of purified LigA was demonstrated in a DNA ligation assay (inhibition constant [Ki] = 4.0 nM) and in a DNA-independent enzyme adenylation assay using full-length LigA (50% inhibitory concentration [IC50] = 28 nM) or its isolated adenylation domain (IC50 = 36 nM). Antistaphylococcal activity was confirmed against methicillin-susceptible and -resistant S. aureus (MSSA and MRSA) strains (MIC = 1.0 μg/ml). Analysis of spontaneous resistance potential revealed a high frequency of emergence (4 × 10−7) of high-level resistant mutants (MIC > 64) with associated ligA lesions. There were no observable effects on growth rate in these mutants. Of 22 sequenced clones, 3 encoded point substitutions within the catalytic adenylation domain and 19 in the downstream oligonucleotide-binding (OB) fold and helix-hairpin-helix (HhH) domains. In vitro characterization of the enzymatic properties of four selected mutants revealed distinct signatures underlying their resistance to inhibition. The infrequent adenylation domain mutations altered the kinetics of adenylation and probably elicited resistance directly. In contrast, the highly represented OB fold domain mutations demonstrated a generalized resistance mechanism in which covalent LigA activation proceeds normally and yet the parameters of downstream ligation steps are altered. A resulting decrease in substrate Km and a consequent increase in substrate occupancy render LigA resistant to competitive inhibition. We conclude that the observed tolerance of staphylococcal cells to such hypomorphic mutations probably invalidates LigA as a viable target for antistaphylococcal chemotherapy. PMID:22585221

  3. Computational Relativistic Astrophysics Using the Flowfield-Dependent Variation Theory

    NASA Technical Reports Server (NTRS)

    Richardson, G. A.; Chung, T. J.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Theoretical models, observations and measurements have preoccupied astrophysicists for many centuries. Only in recent years, has the theory of relativity as applied to astrophysical flows met the challenges of how the governing equations can be solved numerically with accuracy and efficiency. Even without the effects of relativity, the physics of magnetohydrodynamic flow instability, turbulence, radiation, and enhanced transport in accretion disks has not been completely resolved. Relativistic effects become pronounced in such cases as jet formation from black hole magnetized accretion disks and also in the study of Gamma-Ray bursts (GRB). Thus, our concern in this paper is to reexamine existing numerical simulation tools as to the accuracy and efficiency of computations and introduce a new approach known as the flowfield-dependent variation (FDV) method. The main feature of the FDV method consists of accommodating discontinuities of shock waves and high gradients of flow variables such as occur in turbulence and unstable motions. In this paper, the physics involved in the solution of relativistic hydrodynamics and solution strategies of the FDV theory are elaborated. The general relativistic astrophysical flow and shock solver (GRAFSS) is introduced, and some simple example problems for Computational Relativistic Astrophysics (CRA) are demonstrated.

  4. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has received increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection but achieved low performance, and they did not account for the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Across four kinds of experiments, we find that the proposed model achieves a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly. PMID:25254258

  5. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  6. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    ERIC Educational Resources Information Center

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  7. Theory of mind selectively predicts preschoolers’ knowledge-based selective word learning

    PubMed Central

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-01-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory of mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children’s preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children’s developing social cognition and early learning. PMID:26211504

  8. Theory of mind selectively predicts preschoolers' knowledge-based selective word learning.

    PubMed

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-11-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory-of-mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children's preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children's developing social cognition and early learning. © 2015 The British Psychological Society.

  9. An operational theory of laser-radar selenodesy

    USGS Publications Warehouse

    Wildey, R.L.; Schlier, R.E.; Hull, J.A.; Larson, G.

    1967-01-01

    A theory of the utilization of laser techniques for ranging from the Earth to the Moon for the purpose of providing control points on the lunar surface at which the figure of the Moon is measured to an accuracy at least an order of magnitude better than that of the present astrometric measurements is presented. This, in turn, increases the accuracy of the horizontal selenocentric coordinates of topographical features measured by present astrometric methods. The improvement in the vertical and horizontal coordinates of control points in the Apollo landing zone will aid in the analysis of Unmanned Lunar Orbiter photographs for the selection of Apollo landing sites. The present discussion proposes the means of obtaining the ground control upon which the Orbiter photogrammetry is to be fastened. In addition, a technique of combining Goldstone tracking data to show where the resulting lunar figure is positioned relative to the Moon's center of mass is presented. If corner reflectors are placed on the lunar surface, as suggested by many members of the scientific community, or on a lunar orbiting vehicle, one or more Earth-based laser ranging systems are essential. These reflectors will give enough enhancement in return signal to allow for an additional increase in range accuracy of one to two orders of magnitude. In addition to the primary data on the figure of the Moon, a number of other measurements of scientific importance are then readily obtainable. As far as the measurement of control points is concerned, however, the use of corner reflectors is not essential for the success of this project. Questions regarding the influence on the present shape of the Moon of the frozen tide, isostasy, and past impacts of large asteroids appear in large part answerable through the data which are indicated to be obtainable under the present theory. © 1967.

  10. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  11. Application of data fusion technology based on D-S evidence theory in fire detection

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan; Chen, Musheng

    2015-12-01

    In fire detection, judgment and identification based on a single fire characteristic parameter are vulnerable to environmental disturbances, so detection performance is limited by increased false positive and false negative rates. A compound fire detector employs information fusion to judge and identify multiple fire characteristic parameters in order to improve the reliability and accuracy of fire detection. The D-S evidence theory is applied to multi-sensor data fusion: the data from all sensors are first normalized to obtain the basic probability assignment of fire occurrence; the fusion is then performed using the D-S evidence theory; finally, the judgment result is given. The results show that the method achieves accurate fire-signal identification and increases the accuracy of fire alarms, and is therefore simple and effective.

  12. Geoid undulation accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  13. Matters of accuracy and conventionality: prior accuracy guides children's evaluations of others' actions.

    PubMed

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-03-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.

  14. Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models

    NASA Technical Reports Server (NTRS)

    Buchert, T.; Melott, A. L.; Weiss, A. G.

    1993-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony is investigated and solved up to the third order and compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast sigma of approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, third order is not much of an improvement over second order at any stage. The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the

  15. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
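
    One simple way to make an overlap metric anatomy-aware is to weight each pixel by the clinical importance of the anatomical zone it belongs to, so that errors inside critical zones cost more. The sketch below shows such a zone-weighted Dice coefficient as an illustration only; it is not the paper's exact formulation, and the masks and zone weights are hypothetical.

```python
import numpy as np

def weighted_dice(seg, gold, zone_weights):
    """Zone-weighted Dice coefficient.

    seg, gold    : boolean masks (candidate segmentation and "master gold")
    zone_weights : float array of the same shape giving the anatomical
                   importance of each pixel (e.g. higher near the rectal
                   wall or urethral sphincter)
    """
    w = zone_weights
    inter = (w * (seg & gold)).sum()
    return 2.0 * inter / ((w * seg).sum() + (w * gold).sum())

# Toy 1-D example: the segmentation error falls inside a high-importance
# zone, so the anatomy-aware score drops below the plain Dice score.
seg  = np.array([1, 1, 1, 0, 0], dtype=bool)
gold = np.array([1, 1, 1, 1, 0], dtype=bool)
uniform  = np.ones(5)
critical = np.array([1.0, 1.0, 1.0, 3.0, 1.0])  # the miss sits in a critical zone
print(weighted_dice(seg, gold, uniform))   # plain Dice, ~0.857
print(weighted_dice(seg, gold, critical))  # zone-weighted, ~0.667
```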

  16. Geoid Recovery using Geophysical Inverse Theory Applied to Satellite to Satellite Tracking Data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.; Frey, H. (Technical Monitor)

    2000-01-01

    This report describes a new method for determination of the geopotential. The analysis is aimed at the GRACE mission. This Satellite-to-Satellite Tracking (SST) mission is viewed as a mapping mission; the result will be maps of the geoid. The elements of potential theory, celestial mechanics, and Geophysical Inverse Theory are integrated into a computational architecture, and the results of several simulations are presented. Centimeter-accuracy geoids with 50 to 100 km resolution can be recovered with a 30 to 60 day mission.

  17. In your eyes: does theory of mind predict impaired life functioning in bipolar disorder?

    PubMed

    Purcell, Amanda L; Phillips, Mary; Gruber, June

    2013-12-01

    Deficits in emotion perception and social functioning are strongly implicated in bipolar disorder (BD). Examining theory of mind (ToM) may provide one potential mechanism to explain observed socio-emotional impairments in this disorder. The present study prospectively investigated the relationship between theory of mind performance and life functioning in individuals diagnosed with BD compared to unipolar depression and healthy control groups. Theory of mind (ToM) performance was examined in 26 individuals with remitted bipolar I disorder (BD), 29 individuals with remitted unipolar depression (UD), and 28 healthy controls (CTL) using a well-validated advanced theory of mind task. Accuracy and response latency scores were calculated from the task. Life functioning was measured during a 12 month follow-up session. No group differences for ToM accuracy emerged. However, the BD group exhibited significantly shorter response times than the UD and CTL groups. Importantly, quicker response times in the BD group predicted greater life functioning impairment at a 12-month follow-up, even after controlling for baseline symptoms. The stimuli were static representations of emotional states and do not allow for evaluating the appropriateness of context during emotional communication; due to sample size, neither specific comorbidities nor medication effects were analyzed for the BD and UD groups; preliminary status of theory of mind as a construct. Results suggest that quickened socio-emotional decision making may represent a risk factor for future functional impairment in BD. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Understanding Dyslexia in Children through Human Development Theories

    PubMed Central

    Al-Shidhani, Thuraya Ahmed; Arora, Vinita

    2012-01-01

    Dyslexia is a specific learning disability that is neurological in origin, with an estimated overall worldwide prevalence of 5–10% of the population. It is characterised by difficulties in reading, accuracy, fluency, spelling and decoding abilities. The majority of publications reviewed indicated that screening is performed at the preschool level. Screening can also be conducted at birth or the first year of life. Understanding human development theory, for example, Piaget’s human development theory, may help determine at which stage of childhood development dyslexia is more detectable, and therefore guide the management of this disability. The objective of this review is to provide a brief and updated overview of dyslexia and its management in children through human development issues. PMID:23269949

  19. Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory

    NASA Astrophysics Data System (ADS)

    Pei, Di; Yue, Jianhai; Jiao, Jing

    2017-10-01

    This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to collect bearing vibration data as diagnostic evidence. The Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data and improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.

  20. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    PubMed

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity = 23% ± 10.6). Concept accuracy indexes were 35% ± 18.8 for incoming and 62% ± 19.6 for outgoing nurses' maps. Although incoming nurses absorbed a smaller number of concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to the expert nurses' maps. The correlations between concept similarity and both incoming and outgoing nurses' concept accuracy were significant (r = 0.43, p < 0.01; r = 0.68, p < 0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover. Copyright © 2017 Elsevier Ltd. All rights reserved.
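
    Similarity and accuracy indexes of the kind reported can be illustrated as simple set overlaps between the concepts of two maps. The sketch below uses a Jaccard-style similarity and a coverage-style accuracy against the expert map; the definitions and the concept lists are illustrative and hypothetical, not the study's exact formulas.

```python
def similarity(map_a, map_b):
    """Share of concepts common to both maps (Jaccard-style index)."""
    a, b = set(map_a), set(map_b)
    return len(a & b) / len(a | b)

def accuracy(map_x, expert_map):
    """Share of the expert ("gold standard") concepts captured by map_x."""
    x, e = set(map_x), set(expert_map)
    return len(x & e) / len(e)

# Hypothetical concept lists from one handover.
outgoing = {"fever", "antibiotics", "wound drain", "mobilisation"}
incoming = {"fever", "antibiotics", "pain score"}
expert   = {"fever", "antibiotics", "wound drain", "pain score", "allergies"}

print(similarity(incoming, outgoing))  # concept similarity
print(accuracy(incoming, expert))      # incoming nurse's concept accuracy
print(accuracy(outgoing, expert))      # outgoing nurse's concept accuracy
```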

  1. Vortex Loops at the Superfluid Lambda Transition: An Exact Theory?

    NASA Technical Reports Server (NTRS)

    Williams, Gary A.

    2003-01-01

    A vortex-loop theory of the superfluid lambda transition has been developed over the last decade, with many results in agreement with experiments. It is a very simple theory, consisting of just three basic equations. When it was first proposed, the main uncertainty in the theory was the use of Flory scaling to find the fractal dimension of the random-walking vortex loops. Recent developments in high-resolution Monte Carlo simulations have now made it possible to verify the accuracy of this Flory-scaling assumption. Although the loop theory is not yet rigorously proven to be exact, the Monte Carlo results show at the least that it is an extremely good approximation. Recent loop calculations of the critical Casimir effect in helium films in the superfluid phase T < Tc will be compared with similar perturbative RG calculations in the normal phase T > Tc; the two calculations are found to match very nicely right at Tc.

  2. Viscosity Prediction for Petroleum Fluids Using Free Volume Theory and PC-SAFT

    NASA Astrophysics Data System (ADS)

    Khoshnamvand, Younes; Assareh, Mehdi

    2018-04-01

    In this study, free volume theory (FVT) in combination with the perturbed-chain statistical associating fluid theory is implemented for viscosity prediction of petroleum reservoir fluids containing ill-defined components such as cuts and plus fractions. FVT has three adjustable parameters per component in the viscosity calculation, and these parameters are not available for petroleum cuts (especially plus fractions). In this work, the parameters are determined for different petroleum fractions, and a model as a function of molecular weight and specific gravity is developed using 22 real reservoir fluid samples with API gravities in the range of 22 to 45. Afterward, the accuracy of the proposed model is compared with that of the De la Porte et al. method against experimental data. The model is then applied to six real samples in an evaluation step, and the results are compared with the available experimental data and the method of De la Porte et al. Finally, the methods of Lohrenz et al. and Pedersen et al., two common industrial methods for viscosity calculation, are compared with the proposed approach. The absolute average deviation was 9.7 % for the free volume theory method, 15.4 % for the method of Lohrenz et al., and 22.16 % for the method of Pedersen et al.

  3. Relativistic theory for syntonization of clocks in the vicinity of the Earth

    NASA Technical Reports Server (NTRS)

    Wolf, Peter; Petit, G.

    1995-01-01

    A well known prediction of Einstein's general theory of relativity states that two ideal clocks that move with a relative velocity, and are submitted to different gravitational fields, will in general be observed to run at different rates. Similarly, the rate of a clock with respect to the coordinate time of some spacetime reference system depends on the velocity of the clock in that reference system and on the gravitational fields it is submitted to. For the syntonization of clocks and the realization of coordinate times (like TAI), this rate shift has to be taken into account at an accuracy level below the frequency stability of the clocks in question, i.e. all terms that are larger than the instability of the clocks should be corrected for. We present a theory for the calculation of the relativistic rate shift for clocks in the vicinity of the Earth, including all terms larger than one part in 10^18. This, together with previous work on clock synchronization (Petit & Wolf 1993, 1994), amounts to a complete relativistic theory for the realization of coordinate time scales at picosecond synchronization and 10^-18 syntonization accuracy, which should be sufficient to accommodate future developments in time transfer and clock technology.

  4. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  5. Materials Data on LiGa5O8 (SG:53) by Materials Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kristin Persson

    Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations

  6. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  7. Typical Local Measurements in Generalized Probabilistic Theories: Emergence of Quantum Bipartite Correlations

    NASA Astrophysics Data System (ADS)

    Kleinmann, Matthias; Osborne, Tobias J.; Scholz, Volkher B.; Werner, Albert H.

    2013-01-01

    What singles out quantum mechanics as the fundamental theory of nature? Here we study local measurements in generalized probabilistic theories (GPTs) and investigate how observational limitations affect the production of correlations. We find that if only a subset of typical local measurements can be made then all the bipartite correlations produced in a GPT can be simulated to a high degree of accuracy by quantum mechanics. Our result makes use of a generalization of Dvoretzky’s theorem for GPTs. The tripartite correlations can go beyond those exhibited by quantum mechanics, however.

  8. Embedded correlated wavefunction schemes: theory and applications.

    PubMed

    Libisch, Florian; Huang, Chen; Carter, Emily A

    2014-09-16

    Conspectus: Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however, applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications. We first discuss our

  9. Rare, but obviously there: effects of target frequency and salience on visual search accuracy.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Mitroff, Stephen R

    2014-10-01

    Accuracy can be extremely important for many visual search tasks. However, numerous factors work to undermine successful search. Several negative influences on search have been well studied, yet one potentially influential factor has gone almost entirely unexplored-namely, how is search performance affected by the likelihood that a specific target might appear? A recent study demonstrated that when specific targets appear infrequently (i.e., once in every thousand trials) they were, on average, not often found. Even so, some infrequently appearing targets were actually found quite often, suggesting that the targets' frequency is not the only factor at play. Here, we investigated whether salience (i.e., the extent to which an item stands out during search) could explain why some infrequent targets are easily found whereas others are almost never found. Using the mobile application Airport Scanner, we assessed how individual target frequency and salience interacted in a visual search task that included a wide array of targets and millions of trials. Target frequency and salience were both significant predictors of search accuracy, although target frequency explained more of the accuracy variance. Further, when examining only the rarest target items (those that appeared on less than 0.15% of all trials), there was a significant relationship between salience and accuracy such that less salient items were less likely to be found. Beyond implications for search theory, these data suggest significant vulnerability for real-world searches that involve targets that are both infrequent and hard-to-spot. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems.

    PubMed

    Burnos, Piotr; Rys, Dawid

    2017-09-07

    Weigh-in-Motion systems are tools to protect road pavements from the adverse effects of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is now insufficient for direct enforcement of overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not been explained yet. The aim of our study is to fill this gap in the knowledge. The explanation of this phenomenon presented in the paper is based on pavement/sensor mechanics and the application of the multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important in terms of Weigh-in-Motion systems for direct enforcement and will help to improve the accuracy of weighing results.
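
    As a point of reference for the stress-distribution argument above, the single-layer (Boussinesq) half-space expression for the vertical stress beneath a surface point load P, which the multilayer elastic theory generalizes, is the textbook result (not reproduced from the paper):

    ```latex
    % Boussinesq vertical stress at depth z and radial offset r beneath a surface point load P
    \sigma_{z}(r,z) = \frac{3P}{2\pi}\,\frac{z^{3}}{\left(r^{2}+z^{2}\right)^{5/2}}
    ```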

  11. Characterisation of energy response of Al2O3:C optically stimulated luminescent dosemeters (OSLDs) using cavity theory

    PubMed Central

    Scarboro, S. B.; Kry, S. F.

    2013-01-01

    Aluminium oxide (Al2O3:C) is a common material used in optically stimulated luminescent dosemeters (OSLDs). OSLDs have a known energy dependence, which can impact on the accuracy of dose measurements, especially for lower photon energies, where the dosemeter can overrespond by a factor of 3–4. The purpose of this work was to characterise the response of Al2O3:C using cavity theory and to evaluate the applicability of this approach for polyenergetic photon beams. The cavity theory energy response showed good agreement (within 2 %) with the corresponding measured values. A comparison with measured values reported in the literature for low-energy polyenergetic spectra showed more varied agreement (within 6 % on average). The discrepancy between these results is attributed to differences in the raw photon energy spectra used to calculate the energy response. Analysis of the impact of the photon energy spectra versus the mean photon energy showed improved accuracy if the energy response was determined using the entire photon spectrum rather than the mean photon energy. If not accounted for, the overresponse due to photon energy could introduce substantial inaccuracy in dose measurement using OSLDs, and the results of this study indicate that cavity theory may be used to determine the response with reasonable accuracy. PMID:22653437
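
    A hedged sketch of the spectrum-averaged energy response discussed above, written in the large-detector photon limit with fluence spectrum Phi_E and mass energy-absorption coefficients (the symbols are standard and are not taken from the paper):

    ```latex
    % Relative energy response of Al2O3:C with respect to water, averaged over the full
    % photon fluence spectrum \Phi_E rather than evaluated at the mean photon energy
    f = \frac{\int \Phi_E\,E\,\bigl(\mu_{\mathrm{en}}/\rho\bigr)_{\mathrm{Al_2O_3:C}}\,dE}
             {\int \Phi_E\,E\,\bigl(\mu_{\mathrm{en}}/\rho\bigr)_{\mathrm{water}}\,dE}
    ```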

  12. The value of item response theory in clinical assessment: a review.

    PubMed

    Thomas, Michael L

    2011-09-01

    Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. Although IRT has become prevalent in the measurement of ability and achievement, its contributions to clinical domains have been less extensive. Applications of IRT to clinical assessment are reviewed to appraise its current and potential value. Benefits of IRT include comprehensive analyses and reduction of measurement error, creation of computer adaptive tests, meaningful scaling of latent variables, objective calibration and equating, evaluation of test and item bias, greater accuracy in the assessment of change due to therapeutic intervention, and evaluation of model and person fit. The theory may soon reinvent the manner in which tests are selected, developed, and scored. Although challenges remain to the widespread implementation of IRT, its application to clinical assessment holds great promise. Recommendations for research, test development, and clinical practice are provided.

  13. Integrated control-system design via generalized LQG (GLQG) theory

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.

    1989-01-01

    Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large scale, complex systems, such as high performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory will be reviewed, including both continuous-time and discrete-time (sampled-data) formulations.

  14. Feature instructions improve face-matching accuracy

    PubMed Central

    Bindemann, Markus

    2018-01-01

    Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822

  15. Item response theory - A first approach

    NASA Astrophysics Data System (ADS)

    Nunes, Sandra; Oliveira, Teresa; Oliveira, Amílcar

    2017-07-01

    The Item Response Theory (IRT) has become one of the most popular scoring frameworks for measurement data, frequently used in computerized adaptive testing, cognitively diagnostic assessment and test equating. According to Andrade et al. (2000), IRT can be defined as a set of mathematical models (Item Response Models - IRM) constructed to represent the probability of an individual giving the right answer to an item of a particular test. The number of Item Response Models available for measurement analysis has increased considerably in the last fifteen years due to increasing computer power and due to a demand for accuracy and more meaningful inferences grounded in complex data. The developments in modeling with Item Response Theory were related to developments in estimation theory, most remarkably Bayesian estimation with Markov chain Monte Carlo algorithms (Patz & Junker, 1999). The popularity of Item Response Theory has also implied numerous overviews in books and journals, and many connections between IRT and other statistical estimation procedures, such as factor analysis and structural equation modeling, have been made repeatedly (Van der Linden & Hambleton, 1997). As stated before, Item Response Theory covers a variety of measurement models, ranging from basic one-dimensional models for dichotomously and polytomously scored items and their multidimensional analogues to models that incorporate information about cognitive sub-processes which influence the overall item response process. The aim of this work is to introduce the main concepts associated with one-dimensional models of Item Response Theory, to specify the logistic models with one, two and three parameters, to discuss some properties of these models and to present the main estimation procedures.
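
    The three-parameter logistic model referred to above has the standard form shown below, with discrimination a_i, difficulty b_i, and pseudo-guessing c_i; the two- and one-parameter models follow by fixing c_i = 0 and additionally a_i = 1:

    ```latex
    % Three-parameter logistic (3PL) item response function
    P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i\left(\theta - b_i\right)}}
    ```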

  16. Predicting Bond Dissociation Energies of Transition-Metal Compounds by Multiconfiguration Pair-Density Functional Theory and Second-Order Perturbation Theory Based on Correlated Participating Orbitals and Separated Pairs.

    PubMed

    Bao, Junwei Lucas; Odoh, Samuel O; Gagliardi, Laura; Truhlar, Donald G

    2017-02-14

    We study the performance of multiconfiguration pair-density functional theory (MC-PDFT) and multireference perturbation theory for the computation of the bond dissociation energies in 12 transition-metal-containing diatomic molecules and three small transition-metal-containing polyatomic molecules and in two transition-metal dimers. The first step is a multiconfiguration self-consistent-field calculation, for which two choices must be made: (i) the active space and (ii) its partition into subspaces, if the generalized active space formulation is used. In the present work, the active space is chosen systematically by using three correlated-participating-orbitals (CPO) schemes, and the partition is chosen by using the separated-pair (SP) approximation. Our calculations show that MC-PDFT generally has similar accuracy to CASPT2, and the active-space dependence of MC-PDFT is not very great for transition-metal-ligand bond dissociation energies. We also find that the SP approximation works very well, and in particular SP with the fully translated BLYP functional SP-ftBLYP is more accurate than CASPT2. SP greatly reduces the number of configuration state functions relative to CASSCF. For the cases of FeO and NiO with extended-CPO active space, for which complete active space calculations are unaffordable, SP calculations are not only affordable but also of satisfactory accuracy. All of the MC-PDFT results are significantly better than the corresponding results with broken-symmetry spin-unrestricted Kohn-Sham density functional theory. Finally we test a perturbation theory method based on the SP reference and find that it performs slightly worse than CASPT2 calculations, and for most cases of the nominal-CPO active space, the approximate SP perturbation theory calculations are less accurate than the much less expensive SP-PDFT calculations.

  17. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  18. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eno, L.; Rabitz, H.

    1981-08-15

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  19. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance, pointing accuracy of laser quads, and the assemblage accuracy of capsule is investigated by using the 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than the amount in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to the random variations than the cylindrical hohlraum and the elliptical hohlraum. Hence, the spherical hohlraum can relax the requirements to the power imbalance and pointing accuracy of laser facility and the assemblage accuracy of capsule.

  20. An Integrated Theory of Attention and Decision Making in Visual Signal Detection

    ERIC Educational Resources Information Center

    Smith, Philip L.; Ratcliff, Roger

    2009-01-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in…

  1. Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)

    NASA Astrophysics Data System (ADS)

    Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai

    We argue for the design of “Privacy-preserving Accountable Accuracy Management Systems (PAAMS)”. The designs of such systems recognize from the onset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the tradeoffs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements. First, a model to link unstructured information such as that found in email, image and document repositories with structured information such as that in traditional databases. Second, a model for accuracy management and entity disambiguation by proactively preventing, detecting and tracing errors in information bases. Third, a model to provide privacy-governed operation as accountability and accuracy are managed.

  2. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  3. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
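
    A minimal sketch of the "absolute mean plus k sigma" accuracy metric defined above (illustrative only; the actual GOES-R assessment relies on covariance analysis and simulation):

    ```python
    import numpy as np

    def accuracy_metric(residuals_nt, k_sigma):
        """Accuracy figure as |mean| + k*sigma of magnetometer residuals, in nT."""
        residuals_nt = np.asarray(residuals_nt, dtype=float)
        return abs(residuals_nt.mean()) + k_sigma * residuals_nt.std(ddof=1)

    # Per the abstract: quiet-time accuracy uses k = 3, storm-time accuracy uses k = 2,
    # both compared against the 1.7 nT requirement (synthetic residuals shown here).
    rng = np.random.default_rng(1)
    print(accuracy_metric(rng.normal(0.2, 0.4, 1000), k_sigma=3) < 1.7)
    print(accuracy_metric(rng.normal(0.3, 0.6, 1000), k_sigma=2) < 1.7)
    ```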

  4. Voice Identification: Levels-of-Processing and the Relationship between Prior Description Accuracy and Recognition Accuracy.

    ERIC Educational Resources Information Center

    Walter, Todd J.

    A study examined whether a person's ability to accurately identify a voice is influenced by factors similar to those proposed by the Supreme Court for eyewitness identification accuracy. In particular, the Supreme Court has suggested that a person's prior description accuracy of a suspect, degree of attention to a suspect, and confidence in…

  5. Accuracy of ab initio electron correlation and electron densities in vanadium dioxide

    NASA Astrophysics Data System (ADS)

    Kylänpää, Ilkka; Balachandran, Janakiraman; Ganesh, Panchapakesan; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.

    2017-11-01

    Diffusion quantum Monte Carlo results are used as a reference to analyze properties related to phase stability and magnetism in vanadium dioxide computed with various formulations of density functional theory. We introduce metrics related to energetics, electron densities and spin densities that give us insight on both local and global variations in the antiferromagnetic M1 and R phases. Importantly, these metrics can address contributions arising from the challenging description of the 3 d orbital physics in this material. We observe that the best description of energetics between the structural phases does not correspond to the best accuracy in the charge density, which is consistent with observations made recently by Medvedev et al. [Science 355, 371 (2017), 10.1126/science.aag0410] in the context of isolated atoms. However, we do find evidence that an accurate spin density connects to correct energetic ordering of different magnetic states in VO2, although local, semilocal, and meta-GGA functionals tend to erroneously favor demagnetization of the vanadium sites. The recently developed SCAN functional stands out as remaining nearly balanced in terms of magnetization across the M1-R transition and correctly predicting the ground state crystal structure. In addition to ranking current density functionals, our reference energies and densities serve as important benchmarks for future functional development. With our reference data, the accuracy of both the energy and the electron density can be monitored simultaneously, which is useful for functional development. So far, this kind of detailed high accuracy reference data for correlated materials has been absent from the literature.

  6. Multiconfiguration Pair-Density Functional Theory Predicts Spin-State Ordering in Iron Complexes with the Same Accuracy as Complete Active Space Second-Order Perturbation Theory at a Significantly Reduced Computational Cost.

    PubMed

    Wilbraham, Liam; Verma, Pragya; Truhlar, Donald G; Gagliardi, Laura; Ciofini, Ilaria

    2017-05-04

    The spin-state orderings in nine Fe(II) and Fe(III) complexes with ligands of diverse ligand-field strength were investigated with multiconfiguration pair-density functional theory (MC-PDFT). The performance of this method was compared to that of complete active space second-order perturbation theory (CASPT2) and Kohn-Sham density functional theory. We also investigated the dependence of CASPT2 and MC-PDFT results on the size of the active-space. MC-PDFT reproduces the CASPT2 spin-state ordering, the dependence on the ligand field strength, and the dependence on active space at a computational cost that is significantly reduced as compared to CASPT2.

  7. Cystic Duct Closure by Sealing With Bipolar Electrocoagulation

    PubMed Central

    Damgaard, B.; Jorgensen, L. N.; Larsen, S. S.; Kristiansen, V. B.

    2010-01-01

    Background: Cystic duct leakage after cholecystectomy is not uncommon and is a potentially serious complication. The aim of this study was to assess a bipolar sealing system (LigaSure®) for closure of the cystic duct. Methods: The records from consecutive laparoscopic cholecystectomies performed in 2 hospitals with closure of the cystic duct with LigaSure after informed consent were recorded and complications and morbidity registered. The records were compared with those of patients undergoing laparoscopic cholecystectomy with closure of the cystic duct with clips during the same period. Results: During the study period, 218 laparoscopic cholecystectomies were performed; 102 of these were performed with the LigaSure. One patient was excluded due to violation of the protocol. We experienced no cases of cystic duct leakage, but in one patient, bile leakage from the gallbladder bed was observed probably due to a small aberrant duct. Conclusion: The LigaSure system was safe and effective for closure and division of the cystic duct in laparoscopic cholecystectomy. PMID:20412641

  8. An extension of the local momentum theory to a distorted wake model of a hovering rotor

    NASA Technical Reports Server (NTRS)

    Kawachi, K.

    1981-01-01

    The local momentum theory is based on the instantaneous balance between the fluid momentum and the blade elemental lift at a local station in the rotor rotational plane. Therefore, the theory has the capability of evaluating timewise variations of air loading and induced velocity distributions along a helicopter blade span. Unlike a complex vortex theory, this theory was developed to analyze the instantaneous induced velocity distribution effectively. The boundaries of this theory and a computer program using this theory are discussed. A concept introduced into the theory is the effect of the rotor wake contraction in hovering flight. A comparison of this extended local momentum theory with a prescribed wake vortex theory is also presented. The results indicate that the extended local momentum theory has the capability of achieving a level of accuracy similar to that of the prescribed wake vortex theory over a wide range of variations in rotor geometric parameters. It is also shown that the analytical results obtained using either theory are in reasonable agreement with experimental data.
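
    For context, the instantaneous balance that the local momentum theory rests on can be written in its simplest hover form as the classical blade-element/momentum equality for an annulus at radius r (a generic textbook statement, not the extended distorted-wake formulation of the paper):

    ```latex
    % Momentum flux through an annulus equated with the blade-element lift (hover, b blades)
    \underbrace{4\pi\rho\,v_i^{2}\,r\,dr}_{\text{annulus momentum}}
    \;=\;
    \underbrace{\tfrac{1}{2}\,\rho\,b\,c\,(\Omega r)^{2}\,C_l(\alpha)\,dr}_{\text{blade element lift}}
    ```

    Solving this equality station by station yields the spanwise induced-velocity distribution v_i(r), which is the quantity the local momentum theory tracks in time.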

  9. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook

    2015-03-07

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.

  10. A refined shear deformation theory for the analysis of laminated plates

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1986-01-01

    A refined, third-order plate theory that accounts for the transverse shear strains is presented, the Navier solutions are derived for certain simply supported cross-ply and antisymmetric angle-ply laminates, and finite-element models are developed for general laminates. The new theory does not require the shear correction factors of the first-order theory (i.e., the Reissner-Mindlin plate theory) because the transverse shear stresses are represented parabolically in the present theory. A mixed finite-element model that uses independent approximations of the generalized displacements and generalized moments, and a displacement model that uses only the generalized displacements as degrees of freedom are developed. The displacement model requires C^1-continuity of the transverse deflection across the inter-element boundaries, whereas the mixed model requires a C^0 element. Also, the mixed model does not require continuous approximations (between elements) of the bending moments. Numerical results are presented to show the accuracy of the present theory in predicting the transverse stresses. Numerical results are also presented for the nonlinear bending of plates, and the results compare well with the experimental results available in the literature.

  11. Genetic algorithm and graph theory based matrix factorization method for online friend recommendation.

    PubMed

    Li, Qu; Yao, Min; Yang, Jianhua; Xu, Ning

    2014-01-01

    Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and used stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. What is more, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
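
    A minimal sketch of the SGD-trained matrix factorization component described above (hypothetical variable names; the KNN influence on user vectors and the graph-based community partitioning of the hybrid method are omitted here):

    ```python
    import numpy as np

    def sgd_matrix_factorization(ratings, n_users, n_items, k=8, lr=0.01, reg=0.02, epochs=50):
        """Learn user/item feature vectors from observed (user, item, rating) triples."""
        rng = np.random.default_rng(0)
        P = 0.1 * rng.standard_normal((n_users, k))   # user feature vectors
        Q = 0.1 * rng.standard_normal((n_items, k))   # item feature vectors
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]                 # prediction error for this triple
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * P[u] - reg * Q[i])
        return P, Q

    # Toy usage: 3 users, 3 items, four observed interactions
    data = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0)]
    P, Q = sgd_matrix_factorization(data, n_users=3, n_items=3)
    print(P @ Q.T)  # predicted user-item affinity matrix
    ```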

  12. Contribution to the theory of propeller vibrations

    NASA Technical Reports Server (NTRS)

    Liebers, F

    1930-01-01

    This report presents a calculation of the torsional frequencies of revolving bars with allowance for the air forces, and a calculation of the flexural (bending) frequencies of revolving straight or tapered bars in terms of the angular velocity of revolution. The calculation is carried out on the basis of Rayleigh's principle of variation. There is also a discussion of error estimation and the accuracy of the results. The author then applies the theory to airplane propellers and discusses the liability of propellers to damage through vibrations due to lack of uniform loading.

  13. Multiconfiguration Pair-Density Functional Theory.

    PubMed

    Li Manni, Giovanni; Carlson, Rebecca K; Luo, Sijie; Ma, Dongxia; Olsen, Jeppe; Truhlar, Donald G; Gagliardi, Laura

    2014-09-09

    We present a new theoretical framework, called Multiconfiguration Pair-Density Functional Theory (MC-PDFT), which combines multiconfigurational wave functions with a generalization of density functional theory (DFT). A multiconfigurational self-consistent-field (MCSCF) wave function with correct spin and space symmetry is used to compute the total electronic density, its gradient, the on-top pair density, and the kinetic and Coulomb contributions to the total electronic energy. We then use a functional of the total density, its gradient, and the on-top pair density to calculate the remaining part of the energy, which we call the on-top-density-functional energy in contrast to the exchange-correlation energy of Kohn-Sham DFT. Because the on-top pair density is an element of the two-particle density matrix, this goes beyond the Hohenberg-Kohn theorem that refers only to the one-particle density. To illustrate the theory, we obtain first approximations to the required new type of density functionals by translating conventional density functionals of the spin densities using a simple prescription, and we perform post-SCF density functional calculations using the total density, density gradient, and on-top pair density from the MCSCF calculations. Double counting of dynamic correlation or exchange does not occur because the MCSCF energy is not used. The theory is illustrated by applications to the bond energies and potential energy curves of H2, N2, F2, CaO, Cr2, and NiCl and the electronic excitation energies of Be, C, N, N(+), O, O(+), Sc(+), Mn, Co, Mo, Ru, N2, HCHO, C4H6, c-C5H6, and pyrazine. The method presented has a computational cost and scaling similar to MCSCF, but a quantitative accuracy, even with the present first approximations to the new types of density functionals, that is comparable to much more expensive multireference perturbation theory methods.

  14. The application of the thermodynamic perturbation theory to study the hydrophobic hydration.

    PubMed

    Mohoric, Tomaz; Urbic, Tomaz; Hribar-Lee, Barbara

    2013-07-14

    The thermodynamic perturbation theory was tested against newly obtained Monte Carlo computer simulations to describe the major features of the hydrophobic effect in a simple 3D-Mercedes-Benz water model: the temperature and hydrophobe size dependence on entropy, enthalpy, and free energy of transfer of a simple hydrophobic solute into water. An excellent agreement was obtained between the theoretical and simulation results. Further, the thermodynamic perturbation theory qualitatively correctly (with respect to the experimental data) describes the solvation thermodynamics under conditions where the simulation results are difficult to obtain with good enough accuracy, e.g., at high pressures.

  15. The application of the thermodynamic perturbation theory to study the hydrophobic hydration

    PubMed Central

    Mohorič, Tomaž; Urbic, Tomaz; Hribar-Lee, Barbara

    2013-01-01

    The thermodynamic perturbation theory was tested against newly obtained Monte Carlo computer simulations to describe the major features of the hydrophobic effect in a simple 3D-Mercedes-Benz water model: the temperature and hydrophobe size dependence on entropy, enthalpy, and free energy of transfer of a simple hydrophobic solute into water. An excellent agreement was obtained between the theoretical and simulation results. Further, the thermodynamic perturbation theory qualitatively correctly (with respect to the experimental data) describes the solvation thermodynamics under conditions where the simulation results are difficult to obtain with good enough accuracy, e.g., at high pressures. PMID:23862923

  16. The application of the thermodynamic perturbation theory to study the hydrophobic hydration

    NASA Astrophysics Data System (ADS)

    Mohorič, Tomaž; Urbic, Tomaz; Hribar-Lee, Barbara

    2013-07-01

    The thermodynamic perturbation theory was tested against newly obtained Monte Carlo computer simulations to describe the major features of the hydrophobic effect in a simple 3D-Mercedes-Benz water model: the temperature and hydrophobe size dependence on entropy, enthalpy, and free energy of transfer of a simple hydrophobic solute into water. An excellent agreement was obtained between the theoretical and simulation results. Further, the thermodynamic perturbation theory qualitatively correctly (with respect to the experimental data) describes the solvation thermodynamics under conditions where the simulation results are difficult to obtain with good enough accuracy, e.g., at high pressures.

  17. Effects of magnetometer calibration and maneuvers on accuracies of magnetometer-only attitude-and-rate determination

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.

    1998-01-01

    Two different algorithms - a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft - are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracies are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. By analogy, the accuracy of the deterministic scheme is noticeably lower as a result of reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.

  18. A Refined Zigzag Beam Theory for Composite and Sandwich Beams

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sciuva, Marco Di; Gherlone, Marco

    2009-01-01

    A new refined theory for laminated composite and sandwich beams that contains the kinematics of the Timoshenko Beam Theory as a proper baseline subset is presented. This variationally consistent theory is derived from the virtual work principle and employs a novel piecewise linear zigzag function that provides a more realistic representation of the deformation states of transverse-shear flexible beams than other similar theories. This new zigzag function is unique in that it vanishes at the top and bottom bounding surfaces of a beam. The formulation does not enforce continuity of the transverse shear stress across the beam s cross-section, yet is robust. Two major shortcomings that are inherent in the previous zigzag theories, shear-force inconsistency and difficulties in simulating clamped boundary conditions, and that have greatly limited the utility of these previous theories are discussed in detail. An approach that has successfully resolved these shortcomings is presented herein. Exact solutions for simply supported and cantilevered beams subjected to static loads are derived and the improved modelling capability of the new zigzag beam theory is demonstrated. In particular, extensive results for thick beams with highly heterogeneous material lay-ups are discussed and compared with corresponding results obtained from elasticity solutions, two other zigzag theories, and high-fidelity finite element analyses. Comparisons with the baseline Timoshenko Beam Theory are also presented. The comparisons clearly show the improved accuracy of the new, refined zigzag theory presented herein over similar existing theories. This new theory can be readily extended to plate and shell structures, and should be useful for obtaining relatively low-cost, accurate estimates of structural response needed to design an important class of high-performance aerospace structures.

  19. No special K! A signal detection framework for the strategic regulation of memory accuracy.

    PubMed

    Higham, Philip A

    2007-02-01

    Two experiments investigated criterion setting and metacognitive processes underlying the strategic regulation of accuracy on the Scholastic Aptitude Test (SAT) using Type-2 signal detection theory (SDT). In Experiment 1, report bias was manipulated by penalizing participants either 0.25 (low incentive) or 4 (high incentive) points for each error. Best guesses to unanswered items were obtained so that Type-2 signal detection indices of discrimination and bias could be calculated. The same incentive manipulation was used in Experiment 2, only the test was computerized, confidence ratings were taken so that receiver operating characteristic (ROC) curves could be generated, and feedback was manipulated. The results of both experiments demonstrated that SDT provides a viable alternative to A. Koriat and M. Goldsmith's (1996c) framework of monitoring and control and reveals information about the regulation of accuracy that their framework does not. For example, ROC analysis indicated that the threshold model implied by formula scoring is inadequate. Instead, performance on the SAT should be modeled with an equal-variance Gaussian, Type-2 signal detection model. ((c) 2007 APA, all rights reserved).
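
    An equal-variance Gaussian signal detection computation of the kind invoked above, written for generic hit and false-alarm rates (for Type-2 analysis these rates are conditioned on answer correctness; names are illustrative):

    ```python
    from scipy.stats import norm

    def sdt_indices(hit_rate, fa_rate):
        """Equal-variance Gaussian SDT: discrimination d' and criterion c."""
        z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return z_h - z_f, -0.5 * (z_h + z_f)

    # Hypothetical example: 80% of correct answers reported vs. 30% of errors reported
    d_prime, criterion = sdt_indices(0.80, 0.30)
    print(d_prime, criterion)
    ```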

  20. Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.

    PubMed

    Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd

    2018-05-06

    The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution. Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information. New knowledge added by this

  1. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, and from inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.

  2. Scanning tunneling microscopy current from localized basis orbital density functional theory

    NASA Astrophysics Data System (ADS)

    Gustafsson, Alexander; Paulsson, Magnus

    2016-03-01

    We present a method capable of calculating elastic scanning tunneling microscopy (STM) currents from localized atomic orbital density functional theory (DFT). To overcome the poor accuracy of the localized orbital description of the wave functions far away from the atoms, we propagate the wave functions using the total DFT potential. From the propagated wave functions, Bardeen's perturbative approach provides the tunneling current. To illustrate the method, we investigate carbon monoxide adsorbed on a Cu(111) surface and recover the depression/protrusion observed experimentally with normal/CO-functionalized STM tips. The theory furthermore allows us to discuss the significance of s- and p-wave tips.
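
    Bardeen's perturbative tunneling current mentioned above takes the standard form below, with the matrix element evaluated on a separation surface S between tip (t) and sample (s) states; this is the generic textbook expression rather than the specific implementation of the paper:

    ```latex
    % First-order (Bardeen) tunneling current at bias V, and the transfer matrix element
    I = \frac{2\pi e}{\hbar}\sum_{t,s}\bigl[f(E_t) - f(E_s + eV)\bigr]\,
        \lvert M_{ts}\rvert^{2}\,\delta\!\left(E_t - E_s - eV\right),
    \qquad
    M_{ts} = -\frac{\hbar^{2}}{2m}\int_{S}\bigl(\chi_t^{*}\nabla\psi_s - \psi_s\nabla\chi_t^{*}\bigr)\cdot d\mathbf{S}
    ```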

  3. Systematic review and network meta-analysis comparing clinical outcomes and effectiveness of surgical treatments for haemorrhoids.

    PubMed

    Simillis, C; Thoukididou, S N; Slesser, A A P; Rasheed, S; Tan, E; Tekkis, P P

    2015-12-01

    The aim was to compare the clinical outcomes and effectiveness of surgical treatments for haemorrhoids. Randomized clinical trials were identified by means of a systematic review. A Bayesian network meta-analysis was performed using the Markov chain Monte Carlo method in WinBUGS. Ninety-eight trials were included with 7827 participants and 11 surgical treatments for grade III and IV haemorrhoids. Open, closed and radiofrequency haemorrhoidectomies resulted in significantly more postoperative complications than transanal haemorrhoidal dearterialization (THD), LigaSure™ and Harmonic® haemorrhoidectomies. THD had significantly less postoperative bleeding than open and stapled procedures, and resulted in significantly fewer emergency reoperations than open, closed, stapled and LigaSure™ haemorrhoidectomies. Open and closed haemorrhoidectomies resulted in more pain on postoperative day 1 than stapled, THD, LigaSure™ and Harmonic® procedures. After stapled, LigaSure™ and Harmonic® haemorrhoidectomies patients resumed normal daily activities earlier than after open and closed procedures. THD provided the earliest time to first bowel movement. The stapled and THD groups had significantly higher haemorrhoid recurrence rates than the open, closed and LigaSure™ groups. Recurrence of haemorrhoidal symptoms was more common after stapled haemorrhoidectomy than after open and LigaSure™ operations. No significant difference was identified between treatments for anal stenosis, incontinence and perianal skin tags. Open and closed haemorrhoidectomies resulted in more postoperative complications and slower recovery, but fewer haemorrhoid recurrences. THD and stapled haemorrhoidectomies were associated with decreased postoperative pain and faster recovery, but higher recurrence rates. The advantages and disadvantages of each surgical treatment should be discussed with the patient before surgery to allow an informed decision to be made. © 2015 BJS Society Ltd Published

  4. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the refining process of the geometry features in a geospatial dataset to improve their actual positions. This actual position relates to the absolute position in a specific coordinate system and the relation to neighborhood features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), the PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset and a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, more accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for positional accuracy improvement of legacy spatial datasets.
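
    A minimal weighted least-squares adjustment sketch of the general kind used in such PAI workflows (a generic parametric LSA via normal equations; the paper's angular-based observation equations are not reproduced, and all names are illustrative):

    ```python
    import numpy as np

    def least_squares_adjustment(A, l, W=None):
        """Parametric LSA: estimate x from observations l with design matrix A and weights W."""
        A = np.asarray(A, dtype=float)
        l = np.asarray(l, dtype=float)
        W = np.eye(len(l)) if W is None else np.asarray(W, dtype=float)
        N = A.T @ W @ A                       # normal matrix
        x_hat = np.linalg.solve(N, A.T @ W @ l)
        v = A @ x_hat - l                     # residuals
        return x_hat, v

    # Toy usage: estimate two shift parameters from three observed coordinate differences
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    l = np.array([0.12, -0.08, 0.05])
    print(least_squares_adjustment(A, l))
    ```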

  5. Continuum theory for cluster morphologies of soft colloids.

    PubMed

    Kosmrlj, A; Pauschenwein, G J; Kahl, G; Ziherl, P

    2011-06-09

    We introduce a continuum description of the thermodynamics of colloids with a core-corona architecture. In the case of thick coronas, their overlap can be treated approximately by replacing the exact one-particle density distribution by a suitably shaped step profile, which provides a convenient way of modeling the spherical, columnar, lamellar, and inverted cluster morphologies predicted by numerical simulations and the more involved theories. We use the model to study monodisperse particles with the hard-core/square-shoulder pair interaction as the simplest representatives of the core-corona class. We derive approximate analytical expressions for the enthalpies of the cluster morphologies which offer a clear insight into the mechanisms at work, and we calculate the lattice spacing and the cluster size for all morphologies of the phase sequence as well as the phase-transition pressures. By comparing the results with the exact crystalline minimum-enthalpy configurations, we show that the accuracy of the theory increases with shoulder width. We discuss possible extensions of the theory that could account for the finite-temperature effects.

  6. Accuracy vs. Fluency: Which Comes First in ESL Instruction?

    ERIC Educational Resources Information Center

    Ebsworth, Miriam Eisenstein

    1998-01-01

    Discusses the debate over fluency versus accuracy in teaching English-as-a-Second-Language (ESL). Defines fluency and accuracy; examines alternative approaches (meaning first, accuracy first, and accuracy and fluency from the beginning); evaluates the alternatives; and highlights implications for teaching ESL. A sidebar presents an accuracy and…

  7. Accuracy Of LTPP Traffic Loading Estimates

    DOT National Transportation Integrated Search

    1998-07-01

    The accuracy and reliability of traffic load estimates are key to determining a pavement's life expectancy. To better understand the variability of traffic loading rates and its effect on the accuracy of the Long Term Pavement Performance (LTPP) prog...

  8. On the Accuracy of the Propagation Theory and the Quality of Background Observations in a Schumann Resonance Inversion Procedure

    NASA Astrophysics Data System (ADS)

    Mushtak, V. C.

    2009-12-01

    Observations of electromagnetic fields in the Schumann resonance (SR) frequency range (5 to 40 Hz) contain information about both the major source of the electromagnetic radiation (repeatedly confirmed to be global lightning activity) and the source-to-observer propagation medium (the Earth-ionosphere waveguide). While the electromagnetic signatures from individual lightning discharges provide preferable experimental material for exploring the medium, the properties of the world-wide lightning process are best reflected in background spectral SR observations. In the latter, electromagnetic contributions from thousands of lightning discharges are accumulated in intervals of about 10-15 minutes - long enough to present a statistically significant (and so theoretically treatable) ensemble of individual flashes, and short enough to reflect the spatial-temporal dynamics of global lightning activity. Thanks to the small (well below 1 dB/Mm) attenuation in the SR range and the accumulated nature of background SR observations, the latter present globally integrated information about lightning activity not available via other (satellite, meteorological) techniques. The most interesting characteristics to be extracted in an inversion procedure are the rates of vertical charge moment change (and their temporal variations) in the major global lightning “chimneys”. The success of such a procedure depends critically on the accuracy of the propagation theory (used to carry out “direct” calculations for the inversion) and the quality of experimental material. Due to the nature of the problem, both factors - the accuracy and the quality - can only be estimated indirectly, which requires specific approaches to assure that the estimates are realistic and more importantly, that the factors could be improved. For the first factor, simulations show that the widely exploited theory of propagation in a uniform (spherically symmetrical) waveguide provides unacceptable (up to

  9. Evaluation of Automatic Vehicle Location accuracy

    DOT National Transportation Integrated Search

    1999-01-01

    This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips two kinds of accuracy...

  10. Communication Accuracy in Magazine Science Reporting.

    ERIC Educational Resources Information Center

    Borman, Susan Cray

    1978-01-01

    Evaluators with scientific expertise who analyzed the accuracy of popularized science news in mass circulation magazines found that the over-all accuracy of the magazine articles was good, and that the major problem was the omission of relevant information. (GW)

  11. Anxiety, anticipation and contextual information: A test of attentional control theory.

    PubMed

    Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark

    2016-09-01

    We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.

  12. The Effect of Flexible Pavement Mechanics on the Accuracy of Axle Load Sensors in Vehicle Weigh-in-Motion Systems

    PubMed Central

    Rys, Dawid

    2017-01-01

    Weigh-in-Motion systems are tools to prevent road pavements from the adverse effects of vehicle overloading. However, the effectiveness of these systems can be significantly increased by improving weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not been explained yet. The aim of our study is to fill this gap in knowledge. The explanation presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems intended for direct enforcement and will help to improve the accuracy of weighing results. PMID:28880215
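
    The record above attributes weighing errors to stress distributions computed with multilayer elastic half-space theory. As a much-simplified illustration of the kind of quantity involved (a single homogeneous half-space rather than the paper's multilayer model), the Boussinesq solution gives the vertical stress beneath a surface point load; the load value and depths below are hypothetical.

    ```python
    import math

    def boussinesq_vertical_stress(P, r, z):
        """Vertical stress (Pa) at radial offset r (m) and depth z (m)
        below a point load P (N) on a homogeneous elastic half-space."""
        R = math.hypot(r, z)
        return 3.0 * P * z**3 / (2.0 * math.pi * R**5)

    # Hypothetical example: 50 kN wheel load, stress 0.1 m below the surface,
    # directly under the load and 0.3 m to the side of it.
    for r in (0.0, 0.3):
        print(f"r = {r:.1f} m: sigma_z = {boussinesq_vertical_stress(50e3, r, 0.10):.3e} Pa")
    ```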

  13. Should the model for risk-informed regulation be game theory rather than decision theory?

    PubMed

    Bier, Vicki M; Lin, Shi-Woei

    2013-02-01

    Risk analysts frequently view the regulation of risks as being largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable, and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly industries that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity-and often also the motive-to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory is capable of dealing with many such strategic interactions as well as game theory can. This is especially true in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games because the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without that, it would generally be necessary to consider much more complicated forms of strategic behavior (including

  14. Effects of a risk-based online mammography intervention on accuracy of perceived risk and mammography intentions.

    PubMed

    Seitz, Holli H; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M; Armstrong, Katrina; Cappella, Joseph N

    2016-10-01

    This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. 2918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (<1.5%; ≥1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) x 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk ≤1.5%. For women with a risk≤1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Effects of a Risk-based Online Mammography Intervention on Accuracy of Perceived Risk and Mammography Intentions

    PubMed Central

    Seitz, Holli H.; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M.; Armstrong, Katrina; Cappella, Joseph N.

    2016-01-01

    Objective This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. Methods 2,918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (< 1.5%; ≥ 1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Results Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk < 1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. Conclusion A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Practice Implications Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. PMID:27178707

  16. Accuracy and reliability of peer assessment of athletic training psychomotor laboratory skills.

    PubMed

    Marty, Melissa C; Henning, Jolene M; Willse, John T

    2010-01-01

    Peer assessment is defined as students judging the level or quality of a fellow student's understanding. No researchers have yet demonstrated the accuracy or reliability of peer assessment in athletic training education. To determine the accuracy and reliability of peer assessment of athletic training students' psychomotor skills. Cross-sectional study. Entry-level master's athletic training education program. First-year (n  =  5) and second-year (n  =  8) students. Participants evaluated 10 videos of a peer performing 3 psychomotor skills (middle deltoid manual muscle test, Faber test, and Slocum drawer test) on 2 separate occasions using a valid assessment tool. Accuracy of each peer-assessment score was examined through percentage correct scores. We used a generalizability study to determine how reliable athletic training students were in assessing a peer performing the aforementioned skills. Decision studies using generalizability theory demonstrated how the peer-assessment scores were affected by the number of participants and number of occasions. Participants had a high percentage of correct scores: 96.84% for the middle deltoid manual muscle test, 94.83% for the Faber test, and 97.13% for the Slocum drawer test. They were not able to reliably assess a peer performing any of the psychomotor skills on only 1 occasion. However, the φ increased (exceeding the 0.70 minimal standard) when 2 participants assessed the skill on 3 occasions (φ  =  0.79) for the Faber test, with 1 participant on 2 occasions (φ  =  0.76) for the Slocum drawer test, and with 3 participants on 2 occasions for the middle deltoid manual muscle test (φ  =  0.72). Although students did not detect all errors, they assessed their peers with an average of 96% accuracy. Having only 1 student assess a peer performing certain psychomotor skills was less reliable than having more than 1 student assess those skills on more than 1 occasion. Peer assessment of psychomotor skills
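
    As a sketch of how such decision (D-) studies work: for a fully crossed persons x raters x occasions design, the index of dependability is phi = var_p / (var_p + absolute error variance), where the absolute error variance pools every non-person variance component divided by the corresponding number of conditions. The variance components below are hypothetical, not the values estimated in the study above; the sketch only illustrates why phi rises as raters and occasions are added.

    ```python
    # Hypothetical variance components for a persons x raters x occasions G-study
    # (not the study's values), used to run a small D-study.
    var = {"p": 0.50, "r": 0.05, "o": 0.04, "pr": 0.08, "po": 0.07, "ro": 0.02, "pro_e": 0.20}

    def phi(n_raters, n_occasions, v=var):
        """Index of dependability for absolute decisions with n_raters raters
        scoring each person on n_occasions occasions."""
        abs_error = (v["r"] / n_raters + v["o"] / n_occasions
                     + v["pr"] / n_raters + v["po"] / n_occasions
                     + v["ro"] / (n_raters * n_occasions)
                     + v["pro_e"] / (n_raters * n_occasions))
        return v["p"] / (v["p"] + abs_error)

    for n_r, n_o in [(1, 1), (1, 2), (2, 3), (3, 2)]:
        print(f"{n_r} rater(s) x {n_o} occasion(s): phi = {phi(n_r, n_o):.2f}")
    ```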

  17. Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng

    2014-05-01

    Although ray theory rests on the high-frequency approximation of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual checks and therefore be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we take an alternative approach and mutually verify the computational accuracy and solution correctness of both ray theory (the multistage irregular shortest-path method) and wave-equation simulation (both the staggered finite-difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through analysis and comparison of wavefield snapshots, common-source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to gain a thorough understanding of the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite-difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model), and that the multistage irregular shortest-path method predicts kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray-tracing results, it is easy to identify the multiple phases (multiples) in the wavefield snapshots, common-source-point gather sections and synthetic seismograms predicted by wave-equation simulation, which is a key issue for later seismic applications.

  18. Theory of mind, inhibitory control, and preschool-age children's suggestibility in different interviewing contexts.

    PubMed

    Scullin, Matthew H; Bonner, Karri

    2006-02-01

    The current study examined the relations among 3- to 5-year-olds' theory of mind, inhibitory control, and three measures of suggestibility: yielding to suggestive questions (yield), shifting answers in response to negative feedback (shift), and accuracy in response to misleading questions during a pressured interview about a live event. Theory of mind aided in the prediction of suggestibility about the live event, and inhibitory control was a moderator variable affecting the consistency of children's sensitivity to social pressure across situations. The findings indicate that theory of mind and inhibitory control predict children's suggestibility about a live event above and beyond yield, shift, and age and that the construct validity of shift may improve as children's inhibitory control develops.

  19. a Protocol for High-Accuracy Theoretical Thermochemistry

    NASA Astrophysics Data System (ADS)

    Welch, Bradley; Dawes, Richard

    2017-06-01

    Theoretical studies of spectroscopy and reaction dynamics including the necessary development of potential energy surfaces rely on accurate thermochemical information. The Active Thermochemical Tables (ATcT) approach by Ruscic^{1} incorporates data for a large number of chemical species from a variety of sources (both experimental and theoretical) and derives a self-consistent network capable of making extremely accurate estimates of quantities such as temperature dependent enthalpies of formation. The network provides rigorous uncertainties, and since the values don't rely on a single measurement or calculation, the provenance of each quantity is also obtained. To expand and improve the network it is desirable to have a reliable protocol such as the HEAT approach^{2} for calculating accurate theoretical data. Here we present and benchmark an approach based on explicitly-correlated coupled-cluster theory and vibrational perturbation theory (VPT2). Methyldioxy and Methyl Hydroperoxide are important and well-characterized species in combustion processes and begin the family of (ethyl-, propyl-based, etc) similar compounds (much less is known about the larger members). Accurate anharmonic frequencies are essential to accurately describe even the 0 K enthalpies of formation, but are especially important for finite temperature studies. Here we benchmark the spectroscopic and thermochemical accuracy of the approach, comparing with available data for the smallest systems, and comment on the outlook for larger systems that are less well-known and characterized. ^{1}B. Ruscic, Active Thermochemical Tables (ATcT) values based on ver. 1.118 of the Thermochemical Network (2015); available at ATcT.anl.gov ^{2}A. Tajti, P. G. Szalay, A. G. Császár, M. Kállay, J. Gauss, E. F. Valeev, B. A. Flowers, J. Vázquez, and J. F. Stanton. JCP 121, (2004): 11599.

  20. Improving coding accuracy in an academic practice.

    PubMed

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) to accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
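
    The pre/post comparison reported above is a paired t test. A minimal sketch of that analysis with SciPy, using made-up per-provider accuracy rates rather than the study's data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-provider coding accuracy (proportion compliant),
    # before and after the educational intervention; not the study's data.
    baseline = np.array([0.21, 0.30, 0.18, 0.35, 0.27, 0.22, 0.31])
    followup = np.array([0.24, 0.28, 0.20, 0.33, 0.29, 0.21, 0.30])

    t_stat, p_value = stats.ttest_rel(baseline, followup)  # paired t test
    print(f"mean before = {baseline.mean():.3f}, after = {followup.mean():.3f}")
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    ```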

  1. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  2. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  3. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra that underpins the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. The paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1 and improve machining accuracy.

  4. Modelling of thick composites using a layerwise laminate theory

    NASA Technical Reports Server (NTRS)

    Robbins, D. H., Jr.; Reddy, J. N.

    1993-01-01

    The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g. simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.

  5. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, because the navigation information obtained from the two different kinds of sensors is complementary, the accuracy of a SINS/CNS integrated navigation system can be effectively enhanced, and SINS/CNS systems are therefore widely used in marine navigation. However, the CNS is easily disturbed by its surroundings, which makes its output discontinuous, and the uncertainty caused by the missing measurements reduces system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, the new robust H∞ filter improves the robustness of the integrated system. Finally, the new Krein-space-based robust filter is evaluated by numerical simulations and actual experiments. The simulation and experimental results show that attitude errors can be effectively reduced by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF), the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  6. Low-energy effective field theory below the electroweak scale: operators and matching

    NASA Astrophysics Data System (ADS)

    Jenkins, Elizabeth E.; Manohar, Aneesh V.; Stoffer, Peter

    2018-03-01

    The gauge-invariant operators up to dimension six in the low-energy effective field theory below the electroweak scale are classified. There are 70 Hermitian dimension-five and 3631 Hermitian dimension-six operators that conserve baryon and lepton number, as well as Δ B = ±Δ L = ±1, Δ L = ±2, and Δ L = ±4 operators. The matching onto these operators from the Standard Model Effective Field Theory (SMEFT) up to order 1 /Λ2 is computed at tree level. SMEFT imposes constraints on the coefficients of the low-energy effective theory, which can be checked experimentally to determine whether the electroweak gauge symmetry is broken by a single fundamental scalar doublet as in SMEFT. Our results, when combined with the one-loop anomalous dimensions of the low-energy theory and the one-loop anomalous dimensions of SMEFT, allow one to compute the low-energy implications of new physics to leading-log accuracy, and combine them consistently with high-energy LHC constraints.

  7. Improved Statistical Sampling and Accuracy with Accelerated Molecular Dynamics on Rotatable Torsions.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2012-11-13

    In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
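
    The comparison between all-aMD and RaMD hinges on how much effective sample size survives exponential reweighting of the boosted frames. A common estimator (not necessarily the exact one used in the paper) is N_eff = (sum w)^2 / sum w^2 with weights w = exp(beta * dV); the boost-energy distributions below are synthetic and only illustrate that a wider spread of dV shrinks N_eff.

    ```python
    import numpy as np

    KB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

    def effective_sample_size(delta_v_kcal, temperature=300.0):
        """Effective number of independent points after exponential
        reweighting of aMD frames: N_eff = (sum w)^2 / sum w^2."""
        beta = 1.0 / (KB * temperature)
        logw = beta * np.asarray(delta_v_kcal)
        w = np.exp(logw - logw.max())  # subtract the max for numerical stability
        return w.sum() ** 2 / np.square(w).sum()

    # Toy example: a wider spread of boost energies -> far fewer effective samples.
    rng = np.random.default_rng(0)
    print(effective_sample_size(rng.normal(3.0, 0.5, 10000)))  # modest boost spread
    print(effective_sample_size(rng.normal(6.0, 2.0, 10000)))  # large spread, small N_eff
    ```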

  8. Atomic Theory and Multiple Combining Proportions: The Search for Whole Number Ratios.

    PubMed

    Usselman, Melvyn C; Brown, Todd A

    2015-04-01

    John Dalton's atomic theory, with its postulate of compound formation through atom-to-atom combination, brought a new perspective to weight relationships in chemical reactions. A presumed one-to-one combination of atoms A and B to form a simple compound AB allowed Dalton to construct his first table of relative atomic weights from literature analyses of appropriate binary compounds. For such simple binary compounds, the atomic theory had little advantage over affinity theory as an explanation of fixed proportions by weight. For ternary compounds of the form AB2, however, atomic theory made quantitative predictions that were not deducible from affinity theory. Atomic theory required that the weight of B in the compound AB2 be exactly twice that in the compound AB. Dalton, Thomas Thomson and William Hyde Wollaston all published within a few years of each other experimental data that claimed to give the predicted results with the required accuracy. There are nonetheless several experimental barriers to obtaining the desired integral multiple proportions. In this paper I will discuss replication experiments which demonstrate that only Wollaston's results are experimentally reliable. It is likely that such replicability explains why Wollaston's experiments were so influential.

  9. Coarse-grained density functional theories for metallic alloys: Generalized coherent-potential approximations and charge-excess functional theory

    NASA Astrophysics Data System (ADS)

    Bruno, Ezio; Mammano, Francesco; Fiorino, Antonino; Morabito, Emanuela V.

    2008-04-01

    The class of the generalized coherent-potential approximations (GCPAs) to the density functional theory (DFT) is introduced within the multiple scattering theory formalism with the aim of dealing with ordered or disordered metallic alloys. All GCPA theories are based on a common ansatz for the kinetic part of the Hohenberg-Kohn functional and each theory of the class is specified by an external model concerning the potential reconstruction. Most existing DFT implementations of CPA-based theories belong to the GCPA class. The analysis of the formal properties of the density functional defined by GCPA theories shows that it consists of marginally coupled local contributions. Furthermore, it is shown that the GCPA functional does not depend on the details of the charge density and that it can be exactly rewritten as a function of the appropriate charge multipole moments to be associated with each lattice site. A general procedure based on the integration of the qV laws is described that allows for the explicit construction of the same function. The coarse-grained nature of the GCPA density functional implies a great deal of computational advantages and is connected with the O(N) scalability of GCPA algorithms. Moreover, it is shown that a convenient truncated series expansion of the GCPA functional leads to the charge-excess functional (CEF) theory [E. Bruno, Phys. Rev. Lett. 91, 166401 (2003)], which here is offered in a generalized version that includes multipolar interactions. CEF and the GCPA numerical results are compared with state-of-the-art linearized augmented plane wave (LAPW) full-potential density functional calculations for 62 bcc- and fcc-based ordered CuZn alloys, over the whole range of concentrations. Two facts clearly emerge from these extensive tests. In the first place, the discrepancies between GCPA and CEF results are always within the numerical accuracy of the calculations, both for the site charges and the total energies. In the second place, the

  10. Density functional theory: Foundations reviewed

    NASA Astrophysics Data System (ADS)

    Kryachko, Eugene S.; Ludeña, Eduardo V.

    2014-11-01

    -geared functionals. These problems are discussed by making reference to ab initio DFT as well as to the local-scaling-transformation version of DFT, LS-DFT. In addition, we examine the question of the accuracy of approximate exchange-correlation functionals in the light of their non-observance of the variational principle. Why do approximate functionals yield reasonable (and accurate) descriptions of many molecular and condensed matter properties? Are the conditions imposed on exchange and correlation functionals sufficiently adequate to produce accurate semi-empirical functionals? In this respect, we consider the question of whether the results reflect a true approach to chemical accuracy or are just the outcome of a virtuoso-like performance which cannot be systematically improved. We discuss the issue of the accuracy of the contemporary DFT results by contrasting them to those obtained by the alternative RDMT and NOFT. We discuss the possibility of improving DFT functionals by applying in a systematic way the N-representability conditions on the 2-RDM. In this respect, we emphasize the possibility of constructing 2-matrices in the context of the local scaling transformation version of DFT to which the N-representability condition of RDM theory may be applied. We end up our revision of HKS-DFT by considering some of the problems related to spin symmetry and discuss some current issues dealing with a proper treatment of open-shell systems. We are particularly concerned, as in the rest of this paper, mostly with foundational issues arising in the construction of functionals. We dedicate the whole Section 4 to the local-scaling transformation version of density functional theory, LS-DFT. The reason is that in this theory some of the fundamental problems that appear in HKS-DFT, have been solved. For example, in LS-DFT the functionals are, in principle, designed to fulfill v- and N-representability conditions from the outset. This is possible because LS-DFT is based on density

  11. Thematic Accuracy Assessment of the 2011 National Land ...

    EPA Pesticide Factsheets

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest l
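
    Overall, user's, and producer's accuracies of this kind are computed from an error (confusion) matrix of map versus reference labels. A minimal sketch with a hypothetical three-class matrix (not NLCD data), taking rows as map labels and columns as reference labels:

    ```python
    import numpy as np

    # Hypothetical error matrix: rows = map labels, columns = reference labels.
    classes = ["forest", "developed", "water"]
    cm = np.array([
        [420,  30,  5],
        [ 25, 310, 10],
        [  5,  10, 85],
    ], dtype=float)

    overall = np.trace(cm) / cm.sum()         # proportion of correctly mapped samples
    users = np.diag(cm) / cm.sum(axis=1)      # per map class: 1 - commission error
    producers = np.diag(cm) / cm.sum(axis=0)  # per reference class: 1 - omission error

    print(f"overall accuracy: {overall:.3f}")
    for name, ua, pa in zip(classes, users, producers):
        print(f"{name:10s} user's = {ua:.3f}  producer's = {pa:.3f}")
    ```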

  12. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddler’s speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
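
    As an illustration of the general idea of a weighted accuracy score (the published WSSA defines its own specific weighting scheme; the error categories and weights below are hypothetical):

    ```python
    # A toy weighted-accuracy score in the spirit of the WSSA: each produced sound
    # is compared with its target and given partial credit according to
    # hypothetical weights (the published measure defines its own weights).
    WEIGHTS = {
        "correct": 1.0,
        "distortion": 0.75,    # near-miss production of the target sound
        "substitution": 0.25,  # a different sound produced in place of the target
        "omission": 0.0,       # target sound deleted entirely
    }

    def weighted_accuracy(scored_sounds):
        """scored_sounds: list of error categories, one per target sound."""
        return 100.0 * sum(WEIGHTS[s] for s in scored_sounds) / len(scored_sounds)

    sample = ["correct", "correct", "distortion", "substitution",
              "correct", "omission", "correct", "correct"]
    print(f"weighted accuracy = {weighted_accuracy(sample):.1f}%")
    ```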

  13. High accuracy in short ISS missions

    NASA Astrophysics Data System (ADS)

    Rüeger, J. M.

    1986-06-01

    Traditionally, Inertial Surveying Systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emerging from an increased need for survey control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle-mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5-minute ZUPT intervals, mean offset distances 9 metres) accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.

  14. Accuracy testing of electric groundwater-level measurement tapes

    USGS Publications Warehouse

    Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.

    2015-01-01

    The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).

  15. Boundary layers in centrifugal compressors. [application of boundary layer theory to compressor design

    NASA Technical Reports Server (NTRS)

    Dean, R. C., Jr.

    1974-01-01

    The utility of boundary-layer theory in the design of centrifugal compressors is demonstrated. Boundary-layer development in the diffuser entry region is shown to be important to stage efficiency. The result of an earnest attempt to analyze this boundary layer with the best tools available is displayed. Acceptable prediction accuracy was not achieved. The inaccuracy of boundary-layer analysis in this case would result in stage efficiency prediction as much as four points low. Fluid dynamic reasons for analysis failure are discussed with support from flow data. Empirical correlations used today to circumvent the weakness of the theory are illustrated.

  16. Theory-Agnostic Constraints on Black-Hole Dipole Radiation with Multiband Gravitational-Wave Astrophysics.

    PubMed

    Barausse, Enrico; Yunes, Nicolás; Chamberlain, Katie

    2016-06-17

    The aLIGO detection of the black-hole binary GW150914 opens a new era for probing extreme gravity. Many gravity theories predict the emission of dipole gravitational radiation by binaries. This is excluded to high accuracy in binary pulsars, but entire classes of theories predict this effect predominantly (or only) in binaries involving black holes. Joint observations of GW150914-like systems by aLIGO and eLISA will improve bounds on dipole emission from black-hole binaries by 6 orders of magnitude relative to current constraints, provided that eLISA is not dramatically descoped.

  17. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-10-09

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
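
    Applying such a tape correction is a simple linear scaling of the raw reading by the calibrated error per 100 ft. A sketch with a hypothetical correction factor:

    ```python
    def corrected_depth(raw_reading_ft, correction_per_100ft):
        """Apply a linear calibration correction to an electric-tape reading.
        correction_per_100ft: tape error in feet per 100 ft, determined by
        calibration against a reference (positive if the tape reads long)."""
        return raw_reading_ft - correction_per_100ft * (raw_reading_ft / 100.0)

    # Hypothetical: a tape found to read 0.03 ft long per 100 ft during calibration.
    print(f"{corrected_depth(137.42, 0.03):.2f} ft")  # ~137.38 ft after correction
    ```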

  18. The two and three-loop matter bispectrum in perturbation theories

    NASA Astrophysics Data System (ADS)

    Lazanu, Andrei; Liguori, Michele

    2018-04-01

    We evaluate for the first time the dark matter bispectrum of large-scale structure at two loops in the Standard Perturbation Theory and at three loops in the Renormalised Perturbation Theory (MPTBREEZE formalism), removing in each case the leading divergences in the integrals in order to make them infrared-safe. We show that the Standard Perturbation Theory at two loops can be employed to model the matter bispectrum further into the quasi-nonlinear regime than the one-loop result, up to kmax ~ 0.1 h/Mpc at z = 0, but without reaching a high level of accuracy. In the case of the MPTBREEZE method, we show that its bispectra decay at smaller and smaller scales with increasing loop order, but with improvements that become smaller with each additional loop order. At three loops, this model predicts the bispectrum accurately up to scales kmax ~ 0.17 h/Mpc at z = 0 and kmax ~ 0.24 h/Mpc at z = 1.

  19. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  20. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  1. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    PubMed

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically).

  2. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  3. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  4. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  5. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  6. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  7. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...

  8. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
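
    The quoted equivalences (0.5 arc second to roughly 100 m and 0.05 arc second to roughly 10 m at synchronous altitude) follow from the small-angle relation s = R * theta. A quick check, assuming an observer-to-satellite range of roughly 4.2 x 10^7 m (the exact range depends on station geometry):

    ```python
    import math

    ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)
    GEO_RANGE_M = 4.2e7  # approximate observer-to-satellite range for a geosynchronous orbit

    def cross_track_error(angle_arcsec, range_m=GEO_RANGE_M):
        """Positional error implied by an angular error via s = R * theta."""
        return range_m * angle_arcsec * ARCSEC_TO_RAD

    print(f"{cross_track_error(0.5):.0f} m")   # ~100 m, matching the TDRS-4 result above
    print(f"{cross_track_error(0.05):.0f} m")  # ~10 m, the target absolute accuracy
    ```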

  9. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
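
    The hydraulic exponents referred to above are the b, f and m in the power laws w = aQ^b, d = cQ^f and v = kQ^m; because Q = w*d*v at a cross section, continuity requires b + f + m = 1 (and a*c*k = 1). A sketch that fits the exponents by log-log regression on hypothetical gauging data and checks the continuity constraint:

    ```python
    import numpy as np

    # Hypothetical at-a-station measurements: discharge Q (m^3/s),
    # width w (m), mean depth d (m), mean velocity v (m/s), with Q = w*d*v.
    Q = np.array([2.0, 5.0, 12.0, 30.0, 75.0])
    w = 8.0 * Q**0.10
    d = 0.40 * Q**0.38
    v = Q / (w * d)   # continuity forces the velocity exponent to be 1 - 0.10 - 0.38

    def hydraulic_exponent(x, Q):
        """Slope of log(x) versus log(Q), i.e. the fitted power-law exponent."""
        return np.polyfit(np.log(Q), np.log(x), 1)[0]

    b, f, m = (hydraulic_exponent(x, Q) for x in (w, d, v))
    print(f"b = {b:.2f}, f = {f:.2f}, m = {m:.2f}, sum = {b + f + m:.2f}")  # sum ~ 1
    ```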

  10. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phase motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their
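    A minimal sketch of the relative-accuracy idea: the angular difference between the orientation quaternions reported by two rigidly mounted AHRS modules, which should stay constant during motion. The quaternions below are toy values, not AHRS output.

```python
import numpy as np

def angular_difference_deg(q1, q2):
    """Angle (degrees) between two unit quaternions in (w, x, y, z) order."""
    q1 = np.asarray(q1, float) / np.linalg.norm(q1)
    q2 = np.asarray(q2, float) / np.linalg.norm(q2)
    d = abs(np.dot(q1, q2))                  # |cos(theta/2)|
    return np.degrees(2.0 * np.arccos(np.clip(d, -1.0, 1.0)))

# Identity orientation versus a 10-degree rotation about x.
print(f"{angular_difference_deg([1, 0, 0, 0], [0.9962, 0.0872, 0, 0]):.1f} deg")
```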

  11. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
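    For illustration, a hypothetical weighted scoring function in the spirit of a differentially weighted accuracy measure. The error categories and weights below are invented for the example and are not the published WSSA weights.

```python
# Invented weights: each target sound contributes according to how close the
# production was to the target.
WEIGHTS = {
    "correct": 1.0,
    "distortion": 0.8,
    "close_substitution": 0.5,   # differs from the target by few features
    "far_substitution": 0.2,     # differs from the target by many features
    "omission": 0.0,
}

def weighted_accuracy(outcomes):
    """outcomes: list of category labels, one per target sound in the sample."""
    if not outcomes:
        return 0.0
    return 100.0 * sum(WEIGHTS[c] for c in outcomes) / len(outcomes)

sample = ["correct", "correct", "distortion", "omission", "close_substitution"]
print(f"weighted score = {weighted_accuracy(sample):.1f}%")
```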

  12. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
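    A Monte Carlo sketch of the maximum-rule idea for the simplest (single-feature) case: each display element produces a noisy internal value, and the observer picks the element with the largest value. Conjunction and disjunction displays would add further feature dimensions and distractors that share target features; those details are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def proportion_correct(set_size, dprime, trials=200_000):
    """Probability that the target elicits the maximum decision variable."""
    target = rng.normal(dprime, 1.0, size=trials)
    distractors = rng.normal(0.0, 1.0, size=(trials, set_size - 1))
    return np.mean(target > distractors.max(axis=1))

for n in (2, 4, 8, 16):
    print(f"set size {n:2d}: accuracy = {proportion_correct(n, dprime=1.5):.3f}")
```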

  13. AN EDUCATIONAL THEORY MODEL--(SIGGS), AN INTEGRATION OF SET THEORY, INFORMATION THEORY, AND GRAPH THEORY WITH GENERAL SYSTEMS THEORY.

    ERIC Educational Resources Information Center

    MACCIA, ELIZABETH S.; AND OTHERS

    An annotated bibliography of 20 items and a discussion of its significance was presented to describe current utilization of subject theories in the construction of an educational theory. Also, a theory model was used to demonstrate construction of a scientific educational theory. The theory model incorporated set theory (S), information theory…

  14. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may provide wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion approaches of multisensor data since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  15. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
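    A sketch of the point estimates such a procedure monitors: empirical PPV and NPV curves built from case-control marker values combined with an externally supplied (or estimated) prevalence. This is not the authors' group-sequential machinery, just the standard PPV/NPV identities.

```python
import numpy as np

def ppv_npv_curves(cases, controls, prevalence, thresholds):
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    p, ppv, npv = prevalence, [], []
    for c in thresholds:
        sens = np.mean(cases > c)        # true-positive fraction at threshold c
        spec = np.mean(controls <= c)    # true-negative fraction at threshold c
        ppv.append(sens * p / (sens * p + (1 - spec) * (1 - p) + 1e-12))
        npv.append(spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p + 1e-12))
    return np.array(ppv), np.array(npv)

rng = np.random.default_rng(1)
cases, controls = rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 400)  # toy marker values
ppv, npv = ppv_npv_curves(cases, controls, prevalence=0.1,
                          thresholds=np.linspace(-2, 3, 11))
print(np.round(ppv, 2))
print(np.round(npv, 2))
```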

  16. A note on the accuracy of KS-DFT densities

    NASA Astrophysics Data System (ADS)

    Ranasinghe, Duminda S.; Perera, Ajith; Bartlett, Rodney J.

    2017-11-01

    The accuracy of the density of wave function methods and Kohn-Sham (KS) density functionals is studied using moments of the density, ⟨r^n⟩ = ∫ ρ(r) r^n dτ = ∫₀^∞ 4π r² ρ(r) r^n dr, where n = −1, −2, 0, 1, 2, and 3, which provide information about the short- and long-range behavior of the density. Coupled cluster (CC) singles, doubles, and perturbative triples (CCSD(T)) is considered as the reference density. Three test sets are considered: boron through neon neutral atoms, two and four electron cations, and 3d transition metals. The total density and valence-only density are distinguished by dropping appropriate core orbitals. Among density functionals tested, CAMQTP00 and ωB97x show the least deviation for boron through neon neutral atoms. They also show accurate eigenvalues for the HOMO indicating that they should have a more correct long-range behavior for the density. For transition metals, some density functional approximations outperform some wave function methods, suggesting that the KS determinant could be a better starting point for some kinds of correlated calculations. By using generalized many-body perturbation theory (MBPT), the convergence of second-, third-, and fourth-order KS-MBPT for the density is addressed as it converges to the infinite-order coupled cluster result. For the transition metal test set, the deviations in the KS density functional theory methods depend on the amount of exact exchange the functional uses. Functionals with exact exchange close to 25% show smaller deviations from the CCSD(T) density.
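    For illustration, a minimal numerical sketch of such radial moments, evaluated for the analytic hydrogen 1s density ρ(r) = e^(−2r)/π (atomic units) as a stand-in for a correlated or DFT density; the functionals and densities discussed in the record are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def radial_moment(rho, n, rmax=50.0):
    """<r^n> = int_0^inf 4*pi*r^2 rho(r) r^n dr for a spherical density rho(r)."""
    integrand = lambda r: 4.0 * np.pi * rho(r) * r**(n + 2)
    value, _ = quad(integrand, 0.0, rmax)
    return value

rho_1s = lambda r: np.exp(-2.0 * r) / np.pi   # hydrogen 1s density, atomic units
for n in (-2, -1, 0, 1, 2, 3):
    print(f"<r^{n}> = {radial_moment(rho_1s, n):.4f}")
```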

  17. Accuracy of Carbohydrate Counting in Adults

    PubMed Central

    Rushton, Wanda E.

    2016-01-01

    In Brief This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  18. Invasive advance of an advantageous mutation: nucleation theory.

    PubMed

    O'Malley, Lauren; Basham, James; Yasi, Joseph A; Korniss, G; Allstadt, Andrew; Caraco, Thomas

    2006-12-01

    For sedentary organisms with localized reproduction, spatially clustered growth drives the invasive advance of a favorable mutation. We model competition between two alleles where recurrent mutation introduces a genotype with a rate of local propagation exceeding the resident's rate. We capture ecologically important properties of the rare invader's stochastic dynamics by assuming discrete individuals and local neighborhood interactions. To understand how individual-level processes may govern population patterns, we invoke the physical theory for nucleation of spatial systems. Nucleation theory discriminates between single-cluster and multi-cluster dynamics. A sufficiently low mutation rate, or a sufficiently small environment, generates single-cluster dynamics, an inherently stochastic process; a favorable mutation advances only if the invader cluster reaches a critical radius. For this mode of invasion, we identify the probability distribution of waiting times until the favored allele advances to competitive dominance, and we ask how the critical cluster size varies as propagation or mortality rates vary. Increasing the mutation rate or system size generates multi-cluster invasion, where spatial averaging produces nearly deterministic global dynamics. For this process, an analytical approximation from nucleation theory, called Avrami's Law, describes the time-dependent behavior of the genotype densities with remarkable accuracy.
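    For reference, Avrami's law gives the transformed (here, invaded) fraction as X(t) = 1 − exp(−K tⁿ). The sketch below simply evaluates that expression; K and n are arbitrary example values, not parameters fitted to the model in the record.

```python
import numpy as np

def avrami_fraction(t, K, n):
    """Fraction of the habitat occupied by the favored genotype at time t."""
    return 1.0 - np.exp(-K * np.power(t, n))

t = np.linspace(0.0, 10.0, 6)
print(np.round(avrami_fraction(t, K=0.02, n=3), 3))
```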

  19. Critical thinking and accuracy of nurses' diagnoses.

    PubMed

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.

  20. The Effects of Alcohol Intoxication on Accuracy and the Confidence–Accuracy Relationship in Photographic Simultaneous Line‐ups

    PubMed Central

    Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.

    2017-01-01

    Summary Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator-present or a perpetrator-absent simultaneous line-up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed. © 2017 The Authors Applied Cognitive Psychology Published by John Wiley & Sons Ltd. PMID:28781426
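    A sketch of what a calibration analysis does operationally: bin decisions by reported confidence and compute the proportion correct within each bin. The data below are simulated for illustration and do not reproduce the study's results.

```python
import numpy as np

def calibration_table(confidence, correct, edges=(0, 20, 40, 60, 80, 101)):
    confidence = np.asarray(confidence)
    correct = np.asarray(correct, dtype=float)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        if mask.any():
            rows.append((lo, min(hi - 1, 100), int(mask.sum()), correct[mask].mean()))
    return rows

rng = np.random.default_rng(2)
conf = rng.integers(0, 101, size=300)                # confidence ratings, 0-100
correct = rng.random(300) < (0.3 + 0.006 * conf)     # simulated: accuracy rises with confidence
for lo, hi, n, acc in calibration_table(conf, correct):
    print(f"confidence {lo:3d}-{hi:3d}: n = {n:3d}, proportion correct = {acc:.2f}")
```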

  1. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy achieved across the 45 check points was reliably (at the 95% level) within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance from a flight altitude of 90 m above ground level. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha. This finding has shown
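    A sketch of the check-point comparison itself: given photogrammetric and RTK GPS coordinates at common points, report the error bound met by 95% of the points, in the style of the figures quoted above. The coordinate arrays are simulated placeholders.

```python
import numpy as np

def accuracy_95(photo_xyz, gps_xyz):
    d = np.asarray(photo_xyz, float) - np.asarray(gps_xyz, float)
    horizontal = np.hypot(d[:, 0], d[:, 1])
    vertical = np.abs(d[:, 2])
    return np.percentile(horizontal, 95), np.percentile(vertical, 95)

rng = np.random.default_rng(3)
gps = rng.uniform(0, 100, size=(45, 3))                            # 45 check points
photo = gps + rng.normal(0.0, [0.02, 0.02, 0.03], size=(45, 3))    # simulated UAV solution
h95, v95 = accuracy_95(photo, gps)
print(f"95% within {h95 * 1000:.0f} mm horizontally, {v95 * 1000:.0f} mm vertically")
```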

  2. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. What is more, due to the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by the surroundings, which will lead to the output being discontinuous. Thus, the uncertainty problem caused by lost measurements will reduce the system accuracy. In this paper, a robust H∞ filter based on the Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established reasonably. By taking the uncertainty problem into account, in this paper, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on the Krein space theory is evaluated by numerical simulations and actual experiments. Additionally, the simulation and experiment results and analysis show that the attitude errors can be reduced effectively by utilizing the proposed robust filter when measurements are missing or discontinuous. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and the availability of the proposed robust H∞ filter. PMID:26999153

  3. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  4. Classical and non-classical effective medium theories: New perspectives

    NASA Astrophysics Data System (ADS)

    Tsukerman, Igor

    2017-05-01

    Future research in electrodynamics of periodic electromagnetic composites (metamaterials) can be expected to produce sophisticated homogenization theories valid for any composition and size of the lattice cell. The paper outlines a promising path in that direction, leading to non-asymptotic and nonlocal homogenization models, and highlights aspects of homogenization that are often overlooked: the finite size of the sample and the role of interface boundaries. Classical theories (e.g. Clausius-Mossotti, Maxwell Garnett), while originally derived from a very different set of ideas, fit well into the proposed framework. Nonlocal effects can be included in the model, making order-of-magnitude accuracy improvements possible. One future challenge is to determine what effective parameters can or cannot be obtained for a given set of constituents of a metamaterial lattice cell, thereby delineating the possible from the impossible in metamaterial design.

  5. How the Laser Helped to Improve the Test of Special Theory of Relativity?

    ERIC Educational Resources Information Center

    Singh, Satya Pal

    2013-01-01

    In this paper I review the tests done for validating the special theory of relativity using masers and lasers over the last century. Michelson and Morley performed the first experimental verification of the isotropy of space for the propagation of light in 1887. It had an accuracy of 1/100th of a fringe shift. The predicted fringe shift on the basis…

  6. Assessment of G3(MP2)//B3 theory including a pseudopotential for molecules containing first-, second-, and third-row representative elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocha, Carlos Murilo Romero; Morgon, Nelson Henrique; Custodio, Rogério, E-mail: roger@iqm.unicamp.br

    2013-11-14

    G3(MP2)//B3 theory was modified to incorporate compact effective potential (CEP) pseudopotentials, providing a theoretical alternative referred to as G3(MP2)//B3-CEP for calculations involving first-, second-, and third-row representative elements. The G3/05 test set was used as a standard to evaluate the accuracy of the calculated properties. G3(MP2)//B3-CEP theory was applied to the study of 247 standard enthalpies of formation, 104 ionization energies, 63 electron affinities, 10 proton affinities, and 22 atomization energies, comprising 446 experimental energies. The mean absolute deviations compared with the experimental data for all thermochemical results presented an accuracy of 1.4 kcal mol⁻¹ for G3(MP2)//B3 and 1.6 kcal mol⁻¹ for G3(MP2)//B3-CEP. Approximately 75% and 70% of the calculated properties are found with accuracy between ±2 kcal mol⁻¹ for G3(MP2)//B3 and G3(MP2)//B3-CEP, respectively. Considering a confidence interval of 95%, the results may oscillate between ±4.2 kcal mol⁻¹ and ±4.6 kcal mol⁻¹, respectively. The overall statistical behavior indicates that the calculations using pseudopotential present similar behavior to the all-electron theory. Of equal importance to the accuracy is the CPU time, which was reduced by between 10% and 40%.

  7. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based on post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) How well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
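    A toy sketch of how waveform agreement is typically quantified: a normalized noise-weighted overlap (here with a flat noise spectrum and no maximization over time or phase, both of which real comparisons include). The chirp signals are stand-ins, not SpEC or model waveforms.

```python
import numpy as np

def overlap(h1, h2):
    """Normalized inner product of two real time-domain waveforms (white noise assumed)."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.dot(h1, h2) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

t = np.linspace(0.0, 1.0, 4096)
h_ref = np.sin(2 * np.pi * (30 * t + 15.0 * t**2))    # toy "numerical relativity" chirp
h_mod = np.sin(2 * np.pi * (30 * t + 15.2 * t**2))    # slightly mis-modelled chirp
m = overlap(h_ref, h_mod)
print(f"overlap = {m:.4f}, mismatch = {1 - m:.4f}")
```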

  8. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10⁻¹⁵ for periods of 30-100 days.

  9. Cosmic microwave background theory

    PubMed Central

    Bond, J. Richard

    1998-01-01

    A long-standing goal of theorists has been to constrain cosmological parameters that define the structure formation theory from cosmic microwave background (CMB) anisotropy experiments and large-scale structure (LSS) observations. The status and future promise of this enterprise is described. Current band-powers in ℓ-space are consistent with a ΔT flat in frequency and broadly follow inflation-based expectations. That the levels are ∼(10⁻⁵)² provides strong support for the gravitational instability theory, while the Far Infrared Absolute Spectrophotometer (FIRAS) constraints on energy injection rule out cosmic explosions as a dominant source of LSS. Band-powers at ℓ ≳ 100 suggest that the universe could not have re-ionized too early. To get the LSS of Cosmic Background Explorer (COBE)-normalized fluctuations right provides encouraging support that the initial fluctuation spectrum was not far off the scale invariant form that inflation models prefer: e.g., for tilted Λ cold dark matter sequences of fixed 13-Gyr age (with the Hubble constant H_0 marginalized), n_s = 1.17 ± 0.3 for Differential Microwave Radiometer (DMR) only; 1.15 ± 0.08 for DMR plus the SK95 experiment; 1.00 ± 0.04 for DMR plus all smaller angle experiments; 1.00 ± 0.05 when LSS constraints are included as well. The CMB alone currently gives weak constraints on Λ and moderate constraints on Ω_tot, but theoretical forecasts of future long duration balloon and satellite experiments are shown which predict percent-level accuracy among a large fraction of the 10+ parameters characterizing the cosmic structure formation theory, at least if it is an inflation variant. PMID:9419321

  10. Decision Accuracy in Computer-Mediated versus Face-to-Face Decision-Making Teams.

    PubMed

    Hedlund; Ilgen; Hollenbeck

    1998-10-01

    Changes in the way organizations are structured and advances in communication technologies are two factors that have altered the conditions under which group decisions are made. Decisions are increasingly made by teams that have a hierarchical structure and whose members have different areas of expertise. In addition, many decisions are no longer made via strictly face-to-face interaction. The present study examines the effects of two modes of communication (face-to-face or computer-mediated [CM]) on the accuracy of teams' decisions. The teams are characterized by a hierarchical structure and their members differ in expertise consistent with the framework outlined in the Multilevel Theory of team decision making presented by Hollenbeck, Ilgen, Sego, Hedlund, Major, and Phillips (1995). Sixty-four four-person teams worked for 3 h on a computer simulation interacting either face-to-face (FtF) or over a computer network. The communication mode had mixed effects on team processes in that members of FtF teams were better informed and made recommendations that were more predictive of the correct team decision, but leaders of CM teams were better able to differentiate staff members on the quality of their decisions. Controlling for the negative impact of FtF communication on staff member differentiation increased the beneficial effect of the FtF mode on overall decision making accuracy. Copyright 1998 Academic Press.

  11. A well-scaling natural orbital theory

    DOE PAGES

    Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto

    2016-11-01

    Here, we introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix.

  12. A well-scaling natural orbital theory

    PubMed Central

    Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto

    2016-01-01

    We introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix. PMID:27803328

  13. High Accuracy Transistor Compact Model Calibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part to part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  14. Design of Neural Networks for Fast Convergence and Accuracy

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1998-01-01

    A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistically based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local minima phenomenon that hampers traditional network training. A numerical example is performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  15. Modeling Sediment Detention Ponds Using Reactor Theory and Advection-Diffusion Concepts

    NASA Astrophysics Data System (ADS)

    Wilson, Bruce N.; Barfield, Billy J.

    1985-04-01

    An algorithm is presented to model the sedimentation process in detention ponds. This algorithm is based on a mass balance for an infinitesimal layer that couples reactor theory concepts with advection-diffusion processes. Reactor theory concepts are used to (1) determine residence time of sediment particles and to (2) mix influent sediment with previously stored flow. Advection-diffusion processes are used to model the (1) settling characteristics of sediment and the (2) vertical diffusion of sediment due to turbulence. Predicted results of the model are compared to those observed on two pilot scale ponds for a total of 12 runs. The average percent error between predicted and observed trap efficiency was 5.2%. Overall, the observed sedimentology values were predicted with reasonable accuracy.

  16. Voting for image scoring and assessment (VISA)--theory and application of a 2 + 1 reader algorithm to improve accuracy of imaging endpoints in clinical trials.

    PubMed

    Gottlieb, Klaus; Hussain, Fez

    2015-02-19

    Independent central reading or off-site reading of imaging endpoints is increasingly used in clinical trials. Clinician-reported outcomes, such as endoscopic disease activity scores, have been shown to be subject to bias and random error. Central reading attempts to limit bias and improve accuracy of the assessment, two factors that are critical to trial success. Whether one central reader is sufficient and how best to integrate the input of more than one central reader into one output measure is currently not known. In this concept paper we develop the theoretical foundations of a reading algorithm that can achieve both objectives without jeopardizing operational efficiency. We examine the role of expert versus competent readers, frame scoring of imaging as a classification task, and propose a voting algorithm (VISA: Voting for Image Scoring and Assessment) as the most appropriate solution, which could also be used to operationally define imaging gold standards. We propose two image readers plus an optional third reader in cases of disagreement (2 + 1) for ordinary scoring tasks. We argue that it is critical in trials with endoscopically determined endpoints to include the score determined by the site reader, at least in endoscopy clinical trials. Juries with more than 3 readers could define a reference standard that would allow a transition from measuring reader agreement to measuring reader accuracy. We support VISA by applying concepts from engineering (triple-modular redundancy) and voting theory (Condorcet's jury theorem) and illustrate our points with examples from inflammatory bowel disease trials, specifically, the endoscopy component of the Mayo Clinic Score of ulcerative colitis disease activity. Detailed flow-diagrams (pseudo-code) are provided that can inform program design. The VISA "2 + 1" reading algorithm, based on voting, can translate individual reader scores into a final score in a fashion that is both mathematically sound (by avoiding
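    Following the general description above (the full VISA flow diagrams contain additional steps), a minimal sketch of a "2 + 1" adjudication rule and of Condorcet's jury theorem for a three-reader majority with independent readers of equal competence.

```python
def two_plus_one(score_a, score_b, adjudicate):
    """Consensus of two readers; a third reader is called only on disagreement."""
    if score_a == score_b:
        return score_a
    return adjudicate()

def majority_accuracy(p):
    """P(3-reader majority is correct) when each reader is independently correct with prob p."""
    return p**3 + 3 * p**2 * (1 - p)

print(two_plus_one(2, 2, adjudicate=lambda: 3))   # agreement -> 2, no third read
print(two_plus_one(1, 3, adjudicate=lambda: 3))   # disagreement -> third reader decides
for p in (0.6, 0.75, 0.9):
    print(f"reader accuracy {p:.2f} -> majority accuracy {majority_accuracy(p):.3f}")
```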

  17. THE USE OF THE LIGASURE™ DEVICE FOR SCROTAL ABLATION IN MARSUPIALS.

    PubMed

    Cusack, Lara; Cutler, Daniel; Mayer, Joerg

    2017-03-01

    Five sugar gliders (Petaurus breviceps), ranging in age from 3 mo to 3.5 yr, and one opossum (Didelphis virginianus), aged 4.5 mo, presented for elective orchiectomy and scrotal ablation. The LigaSure™ device was safely used for orchiectomy and scrotal ablation in both species. Surgical time with the LigaSure was approximately 4 sec. No grooming of the incision site or self-mutilation was seen in the first 72 hr postoperatively. One sugar glider required wound care approximately 10 days postoperatively following incision-site grooming by a conspecific. The LigaSure provides a rapid, technologically simple and safe surgical technique for scrotal ablation and orchiectomy in the marsupial patient that minimizes surgical, anesthetic, and recovery times.

  18. On the accuracy of personality judgment: a realistic approach.

    PubMed

    Funder, D C

    1995-10-01

    The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.

  19. Displacement Theories for In-Flight Deformed Shape Predictions of Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Richards, W. L.; Tran, Van t.

    2007-01-01

    Displacement theories are developed for a variety of structures with the goal of providing real-time shape predictions for aerospace vehicles during flight. These theories are initially developed for a cantilever beam to predict the deformed shapes of the Helios flying wing. The main structural configuration of the Helios wing is a cantilever wing tubular spar subjected to bending, torsion, and combined bending and torsion loading. The displacement equations that are formulated are expressed in terms of strains measured at multiple sensing stations equally spaced on the surface of the wing spar. Displacement theories for other structures, such as tapered cantilever beams, two-point supported beams, wing boxes, and plates also are developed. The accuracy of the displacement theories is successfully validated by finite-element analysis and classical beam theory using input-strains generated by finite-element analysis. The displacement equations and associated strain-sensing system (such as fiber optic sensors) create a powerful means for in-flight deformation monitoring of aerospace structures. This method serves multiple purposes for structural shape sensing, loads monitoring, and structural health monitoring. Ultimately, the calculated displacement data can be visually displayed to the ground-based pilot or used as input to the control system to actively control the shape of structures during flight.

  20. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared under a realistic noise level of 3%, which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Non-empirical Prediction of the Photophysical and Magnetic Properties of Systems with Open d- and f-Shells Based on Combined Ligand Field and Density Functional Theory (LFDFT).

    PubMed

    Daul, Claude

    2014-09-01

    Despite the important growth of ab initio and computational techniques, ligand field theory in molecular science, or crystal field theory in condensed matter, offers the most intuitive way to calculate multiplet energy levels arising from systems with open-shell d and/or f electrons. Over the past decade we have developed a ligand field treatment of inorganic molecular modelling that takes advantage of the dominant localization of the frontier orbitals within the metal sphere. This feature, which is observed in any inorganic coordination compound, especially if treated by Density Functional Theory calculations, allows the determination of the electronic structure and properties with surprisingly good accuracy. In ligand field theory, the theoretical concepts consider only a single atomic center and treat its interaction with the chemical environment essentially as a perturbation. The success of the simple ligand field theory is therefore not questionable, while the more accurate molecular orbital theory in general over-estimates the metal-ligand covalence and thus yields wave functions that are too delocalized. Although LF theory has always been popular as a semi-empirical method when dealing with molecules of high symmetry, e.g. cubic symmetry, where the number of parameters needed is reasonably small (3 or 5), this is no longer the case for molecules without symmetry that involve both an open d- and f-shell (∼90 parameters). However, the combination of LF theory and Density Functional (DF) theory that we introduced twenty years ago can easily deal with complex molecules of any symmetry with two or more open shells. These predictions from first principles achieve quite high accuracy (<5%) in terms of state energies. Hence, this approach is well suited to predict the magnetic and photo-physical properties of arbitrary molecules and materials prior to their synthesis, which is the ultimate goal of each computational chemist. We will illustrate the

  2. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions that meet the required accuracies within the specified number of function evaluations. PMID:23483853

  3. Theory of mind in children with traumatic brain injury.

    PubMed

    Dennis, Maureen; Simic, Nevena; Gerry Taylor, H; Bigler, Erin D; Rubin, Kenneth; Vannatta, Kathryn; Gerhardt, Cynthia A; Stancin, Terry; Roncadin, Caroline; Yeates, Keith Owen

    2012-09-01

    Theory of mind (ToM) involves thinking about mental states and intentions to understand what other people know and to predict how they will act. We studied ToM in children with traumatic brain injury (TBI) and age- and gender-matched children with orthopedic injuries (OI), using a new three-frame Jack and Jill cartoon task that measures intentional thinking separate from contingent task demands. In the key ToM trials, which required intentional thinking, Jack switched a black ball from one hat to another of a different color, but Jill did not witness the switch; in the otherwise identical non-ToM trials, the switch was witnessed. Overall accuracy was higher in children with OI than in those with TBI. Children with severe TBI showed a larger decline in accuracy on ToM trials, suggesting a specific deficit in ToM among children with severe TBI. Accuracy was significantly higher on trials following errors than on trials following correct responses, suggesting that all groups monitored performance and responded to errors with increased vigilance. TBI is associated with poorer intentional processing in school-age children and adolescents relative to peers with OI; furthermore, children with TBI are challenged specifically by intentional demands, especially when their injury is severe. (JINS, 2012, 19, 1-9).

  4. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
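    For reference, a minimal sketch of the standard error-matrix measures mentioned above (overall, user's, and producer's accuracy and the kappa coefficient); the confusion matrix is invented for illustration, with rows taken as map classes and columns as reference classes.

```python
import numpy as np

def accuracy_measures(cm):
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    overall = np.trace(cm) / total
    users = np.diag(cm) / cm.sum(axis=1)        # rows = classified (map) classes
    producers = np.diag(cm) / cm.sum(axis=0)    # columns = reference classes
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, users, producers, kappa

cm = [[48, 2, 0],
      [3, 40, 5],
      [1, 4, 47]]
overall, users, producers, kappa = accuracy_measures(cm)
print(f"overall = {overall:.2f}, kappa = {kappa:.2f}")
print("user's:", np.round(users, 2), "producer's:", np.round(producers, 2))
```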

  5. The Data-Ink Ratio and Accuracy of Information Derived from Newspaper Graphs: An Experimental Test of the Theory.

    ERIC Educational Resources Information Center

    Kelly, James D.

    A study tested the data-ink ratio theory, which holds that a reader's recall of quantitative data displayed in a graph containing a substantial amount of non-data-ink will be significantly less than recall from a graph containing little non-data-ink, as it might apply to graphics used in mass circulation newspapers. The experiment employed a…

  6. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn is determined by the rotating accuracy of the lathe and the optical-centering judgment accuracy. When a rotating accuracy of 0.2 µm can be achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, namely the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system that combines the axis-determination tool with the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  7. Accuracy Performance Evaluation of Beidou Navigation Satellite System

    NASA Astrophysics Data System (ADS)

    Wang, W.; Hu, Y. N.

    2017-03-01

    Accuracy is one of the key elements of the regional Beidou Navigation Satellite System (BDS) performance standard. In this paper, we review the definition, specification, and evaluation standard of BDS accuracy. Current accuracy of the regional BDS is analyzed through ground measurements and compared with GPS in terms of dilution of precision (DOP), signal-in-space user range error (SIS URE), and positioning accuracy. The Positioning DOP (PDOP) map of BDS around the Chinese mainland is compared with that of GPS. The GPS PDOP is between 1.0 and 2.0 and does not vary with the user latitude and longitude, while the BDS PDOP varies between 1.5 and 5.0, increasing as the user latitude increases and as the user longitude moves away from 118°. The accuracies of the broadcast orbits of BDS are assessed by taking the precise orbits from the International GNSS Service (IGS) as the reference and by computing satellite laser ranging (SLR) residuals. The radial errors of the broadcast orbits of the BDS inclined geosynchronous orbit (IGSO) and medium orbit (MEO) satellites are at the 0.5 m level, larger than those of GPS satellites, which are at the 0.2 m level. The SLR residuals of geosynchronous orbit (GEO) satellites are 65.0 cm, larger than those of the IGSO and MEO satellites, which are at the 50.0 cm level. The accuracy of broadcast clock offset parameters of BDS is computed by taking the clock measurements of Two-way Satellite Radio Time Frequency Transfer as the reference. Affected by the age of broadcast clock parameters, the error of the broadcast clock offset parameters of the MEO satellites is the largest, at the 0.80 m level. Finally, measurements of the multi-GNSS (MGEX) receivers are used for positioning accuracy assessment of BDS and GPS. It is concluded that the positioning accuracy of regional BDS is better than 10 m in both the horizontal and vertical components. The combined positioning accuracy of both systems is better than that of either system alone.
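    For reference, a minimal sketch of how PDOP follows from satellite geometry at a user location: build the geometry matrix G with rows [−ux, −uy, −uz, 1] from line-of-sight unit vectors and take the square root of the sum of the first three diagonal terms of (GᵀG)⁻¹. The line-of-sight vectors below are made up, not BDS or GPS geometry.

```python
import numpy as np

def pdop(unit_vectors):
    u = np.asarray(unit_vectors, dtype=float)
    G = np.hstack([-u, np.ones((u.shape[0], 1))])      # position + clock geometry matrix
    Q = np.linalg.inv(G.T @ G)
    return np.sqrt(np.trace(Q[:3, :3]))

los = np.array([[0.0, 0.0, 1.0],
                [0.7, 0.0, 0.7],
                [-0.7, 0.0, 0.7],
                [0.0, 0.7, 0.7],
                [0.0, -0.7, 0.7]])
los /= np.linalg.norm(los, axis=1, keepdims=True)      # normalize to unit vectors
print(f"PDOP = {pdop(los):.2f}")
```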

  8. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    PubMed

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, main factors that affect soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were used to precisely obtain the spatial distribution characteristics of regional soil quality; mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See 5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and for the former model, whether expressed as a decision tree or as decision rules, the prediction accuracy was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with the decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
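    A sketch of the two-step idea with common open-source stand-ins: rank candidate factors by mutual information with the soil-quality grade, keep the top-ranked ones, and train a decision tree on the reduced set. It uses scikit-learn rather than See 5.0 and random data in place of the real soil survey.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.random((500, 8))                                 # 8 candidate environmental factors
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)          # grade driven mainly by factors 0 and 3

mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[::-1][:4]                          # keep the 4 most informative factors
print("selected factors:", keep)

Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
print(f"test accuracy = {tree.score(Xte, yte):.2f}")
```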

  9. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  10. On the convergence and accuracy of the FDTD method for nanoplasmonics.

    PubMed

    Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora

    2015-04-20

    Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large
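    For orientation, a sketch of the Drude-plus-critical-points functional form referred to above (an Etchegoin-style dispersion model); the parameter values are placeholders, not a fitted gold or silver data set.

```python
import numpy as np

def drude_critical_points(omega, eps_inf, omega_d, gamma, critical_points):
    """Relative permittivity: Drude term plus a sum of critical-point resonances."""
    eps = eps_inf - omega_d**2 / (omega**2 + 1j * gamma * omega)
    for A, phi, Omega, Gamma in critical_points:
        eps += A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                            + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))
    return eps

omega = np.linspace(2e15, 6e15, 5)                       # angular frequencies, rad/s
cps = [(1.0, -0.8, 4.0e15, 1.0e15),                      # placeholder (A, phi, Omega, Gamma)
       (0.5, -0.9, 5.0e15, 2.0e15)]
print(drude_critical_points(omega, eps_inf=5.0, omega_d=1.3e16, gamma=1.0e14,
                            critical_points=cps))
```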

  11. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  12. 76 FR 23713 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... Location Accuracy Requirements AGENCY: Federal Communications Commission. ACTION: Final rule; announcement... contained in regulations concerning wireless E911 location accuracy requirements. The information collection... standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these...

  13. Certified ion implantation fluence by high accuracy RBS.

    PubMed

    Colaux, Julien L; Jeynes, Chris; Heasman, Keith C; Gwilliam, Russell M

    2015-05-07

    From measurements over the last two years we have demonstrated that the charge collection system based on Faraday cups can robustly give near-1% absolute implantation fluence accuracy for our electrostatically scanned 200 kV Danfysik ion implanter, using four-point-probe mapping with a demonstrated accuracy of 2%, and accurate Rutherford backscattering spectrometry (RBS) of test implants from our quality assurance programme. The RBS is traceable to the certified reference material IRMM-ERM-EG001/BAM-L001, and involves convenient calibrations both of the electronic gain of the spectrometry system (at about 0.1% accuracy) and of the RBS beam energy (at 0.06% accuracy). We demonstrate that accurate RBS is a definitive method to determine quantity of material. It is therefore useful for certifying high quality reference standards, and is also extensible to other kinds of samples such as thin self-supporting films of pure elements. The more powerful technique of Total-IBA may inherit the accuracy of RBS.

  14. Analysis of spatial distribution of land cover maps accuracy

    NASA Astrophysics Data System (ADS)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to incorporate the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain

  15. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: Horizontal Accuracy 74 cm, Horizontal Precision 14 cm, Vertical Accuracy 6.6 cm, Vertical Precision 3 cm.

  16. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  17. Empathic Embarrassment Accuracy in Autism Spectrum Disorder.

    PubMed

    Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G

    2015-06-01

    Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  18. Accuracy investigation of phthalate metabolite standards.

    PubMed

    Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude

    2012-05-01

    Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.

  19. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation the larger effective aperture $L'$ that can be realized from a finite aperture $L$ by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval $L/b \le |x| \le L$, $b > 1$, the signal must be known within an error $\epsilon_N$ given by $\epsilon_N^2 \approx \tfrac{1}{4}(2kL')^3 \left(\tfrac{e}{8b}\,\tfrac{L}{L'}\right)^{2kL'}$, where $L$ is the physical aperture, $L'$ is the extrapolated aperture, and $k = 2\pi/\lambda$.

  20. Self-similar Theory of Wind-driven Sea

    NASA Astrophysics Data System (ADS)

    Zakharov, V. E.

    2015-12-01

    More than two dozen field experiments performed in the ocean and on lakes show that the fetch-limited growth of dimensionless energy and dimensionless peak frequency is described by power-law functions of the dimensionless fetch. Moreover, the exponents of these two functions are connected, to good accuracy, by the standard "magic relation", 10q - 2p = 1. Recent massive numerical experiments, as well as experiments in wave tanks, also confirm this magic relation. All these experimental facts can be interpreted in the framework of the following simple theory. The wind-driven sea is described by the "conservative" Hasselmann kinetic equation. The source terms, wind input and white-capping dissipation, play a secondary role in comparison with the nonlinear term Snl that is responsible for the four-wave resonant interaction. This equation has a four-parameter family of self-similar solutions. The magic relation holds for all members of this family. This fact gives strong hope that the development of a self-consistent analytic theory of the wind-driven sea is a realizable task.

  1. The effect of signal to noise ratio on accuracy of temperature measurements for Brillouin lidar in water

    NASA Astrophysics Data System (ADS)

    Liang, Kun; Niu, Qunjie; Wu, Xiangkui; Xu, Jiaqi; Peng, Li; Zhou, Bo

    2017-09-01

    A lidar system with a Fabry-Pérot etalon and an intensified charge-coupled device can be used to obtain the scattering spectrum of the ocean and retrieve oceanic temperature profiles. However, the spectrum is polluted by noise, which results in measurement error. To analyze the effect of signal-to-noise ratio (SNR) on the accuracy of measurements for Brillouin lidar in water, a theoretical model of the SNR and its characteristics is developed. Noise spectra with different SNRs are measured repeatedly in both simulation and experiment. The results show that accuracy is related to SNR, and, balancing time consumption against quality, the average of five measurements is adopted for real remote sensing under pulse laser conditions of wavelength 532 nm, pulse energy 650 mJ, repetition rate 10 Hz, pulse width 8 ns and linewidth 0.003 cm-1 (90 MHz). Measuring with the Brillouin linewidth has better accuracy at lower temperatures (<15 °C), while measuring with the Brillouin shift is more appropriate at higher temperatures (>15 °C), based on the classical retrieval model we adopt. The experimental results show that the temperature error is 0.71 °C and 0.06 °C based on shift and linewidth, respectively, when the image SNR is in the range of 3.2 dB-3.9 dB.

  2. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" scanner (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the global and partial (upper, middle, and lower parts of the face) 3D PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirement for oral clinic use.
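
    As a rough illustration of the "3D error" step described above (per-point deviation between a registered test surface and the reference surface), a minimal sketch using nearest-neighbour distances follows; the point clouds are synthetic stand-ins, and the study itself computed this metric in Geomagic Studio after ICP registration.

```python
# Sketch: per-point surface deviation ("3D error") between a registered test
# point cloud and a reference cloud, approximated with nearest-neighbour
# distances. The clouds below are synthetic stand-ins, not scanner data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reference_pts = rng.normal(size=(5000, 3))                        # reference surface points
test_pts = reference_pts + rng.normal(scale=0.3, size=(5000, 3))  # registered test points

tree = cKDTree(reference_pts)
dist, _ = tree.query(test_pts)          # distance from each test point to the reference
print(f"mean 3D error: {dist.mean():.3f} (same units as the input coordinates)")
```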

  3. Indicators of Late Emerging Reading-Accuracy Difficulties in Australian Schools

    ERIC Educational Resources Information Center

    Galletly, Susan A.; Knight, Bruce Allen; Dekkers, John; Galletly, Tracey A.

    2009-01-01

    Late-emerging reading-accuracy difficulties are those found present in older students not showing reading-accuracy difficulties when tested in earlier years (Leach, Scarborough and Rescorla, 2003). This paper discusses the constructs of reading-accuracy and late-emerging reading-accuracy difficulties. It then discusses data from a cross-sectional…

  4. Structures and Techniques For Implementing and Packaging Complex, Large Scale Microelectromechanical Systems Using Foundry Fabrication Processes.

    DTIC Science & Technology

    1996-06-01

    [Record excerpt from the report's list of figures and glossary: Figure 5-27, "Mechanical interference between 'Pull Spring' devices"; Figure 5-28, "Array of LIGA mechanical relay switches"; glossary entries including DM (direct metal interconnect technique), DMD (Digital Micromirror Device), EDP (ethylene, diamine, pyrocatechol and water; silicon anisotropic etchant), MOSIS (MOS Implementation Service), PGA (pin grid array, an electronic die package), PZT (lead-zirconate-titanate), and LIGA (Lithographie...).]

  5. Variance approximations for assessments of classification accuracy

    Treesearch

    R. L. Czaplewski

    1994-01-01

    Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
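
    For context, the unweighted kappa statistic that these variance approximations target can be computed directly from an error matrix; a minimal sketch follows, with an illustrative three-class matrix rather than data from the report.

```python
# Sketch: unweighted Cohen's kappa from a classification error matrix.
# The 3-class matrix below is illustrative, not data from the paper.
import numpy as np

def cohens_kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_o = np.trace(confusion) / n                               # observed agreement
    p_e = (confusion.sum(0) * confusion.sum(1)).sum() / n**2    # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

errors = [[45,  3,  2],
          [ 4, 38,  5],
          [ 1,  6, 41]]
print(round(cohens_kappa(errors), 3))
```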

  6. Building an Evaluation Scale using Item Response Theory.

    PubMed

    Lalor, John P; Wu, Hao; Yu, Hong

    2016-11-01

    Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
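
    A minimal sketch of the item response function underlying this kind of analysis, the two-parameter logistic (2PL) model, is shown below; the discrimination and difficulty values are illustrative, not parameters fitted in the paper.

```python
# Sketch: two-parameter logistic (2PL) IRT item response function,
# P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
# where a = discrimination and b = difficulty. Values are illustrative.
import numpy as np

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)            # latent ability of the responder
easy_item = p_correct(theta, a=1.0, b=-1.0)
hard_discriminating_item = p_correct(theta, a=2.5, b=1.0)
print(np.round(easy_item, 2))
print(np.round(hard_discriminating_item, 2))
```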

  7. G3X-K theory: A composite theoretical method for thermochemical kinetics

    NASA Astrophysics Data System (ADS)

    da Silva, Gabriel

    2013-02-01

    A composite theoretical method for accurate thermochemical kinetics, G3X-K, is described. This method is accurate to around 0.5 kcal mol-1 for barrier heights and 0.8 kcal mol-1 for enthalpies of formation. G3X-K is a modification of G3SX theory using the M06-2X density functional for structures and zero-point energies and parameterized for a test set of 223 heats of formation and 23 barrier heights. A reduced perturbation-order variant, G3X(MP3)-K, is also developed, providing around 0.7 kcal mol-1 accuracy for barrier heights and 0.9 kcal mol-1 accuracy for enthalpies, at reduced computational cost. Some opportunities to further improve Gn composite methods are identified and briefly discussed.

  8. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
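
    One widely used deterministic expression of the kind ShinyGPAS visualizes relates accuracy to training-set size N, heritability h2, and the effective number of independent chromosome segments Me; the sketch below evaluates it for illustrative parameter values (a generic example, not ShinyGPAS code).

```python
# Sketch: a deterministic genomic prediction accuracy formula of the kind
# ShinyGPAS visualizes: r = sqrt(N*h2 / (N*h2 + Me)), where N is the number
# of training individuals, h2 the heritability, and Me the effective number
# of independent chromosome segments. Parameter values are illustrative.
import numpy as np

def prediction_accuracy(n_train, h2, m_e):
    return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

for n in (500, 2000, 10000):
    print(n, round(prediction_accuracy(n, h2=0.4, m_e=3000.0), 3))
```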

  9. Throwing speed and accuracy in baseball and cricket players.

    PubMed

    Freeston, Jonathan; Rooney, Kieron

    2014-06-01

    Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. 20 grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO as vertical error increased with increases in speed; the trade-off was worse for cricketers than baseball players. Accuracy was optimized at 70% of MTS for baseballers. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball-training practices to improve throwing performance.

  10. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979

  11. Sixth-order wave aberration theory of ultrawide-angle optical systems.

    PubMed

    Lu, Lijun; Cao, Yiqing

    2017-10-20

    In this paper, we develop sixth-order wave aberration theory of ultrawide-angle optical systems like fisheye lenses. Based on the concept and approach to develop wave aberration theory of plane-symmetric optical systems, we first derive the sixth-order intrinsic wave aberrations and the fifth-order ray aberrations; second, we present a method to calculate the pupil aberration of such kind of optical systems to develop the extrinsic aberrations; third, the relation of aperture-ray coordinates between adjacent optical surfaces is fitted with the second-order polynomial to improve the calculation accuracy of the wave aberrations of a fisheye lens with a large acceptance aperture. Finally, the resultant aberration expressions are applied to calculate the aberrations of two design examples of fisheye lenses; the calculation results are compared with the ray-tracing ones with Zemax software to validate the aberration expressions.

  12. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  13. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  14. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    PubMed

    Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian

    2012-01-01

    We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.

  15. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  16. Reliability and accuracy of Crystaleye spectrophotometric system.

    PubMed

    Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping

    2010-01-01

    To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab both in percentage and in colour difference (ΔE). Difference was analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was prone to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
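
    The ΔE values reported above are CIELAB colour differences. Assuming the classic CIE76 definition (the abstract does not name the exact formula), ΔE is simply the Euclidean distance between two L*a*b* coordinates, as in the sketch below with illustrative values.

```python
# Sketch: CIE76 colour difference between two CIELAB measurements,
# delta_E = sqrt((dL)^2 + (da)^2 + (db)^2). Which delta-E formula the
# study used is not stated in the abstract; values below are illustrative.
import math

def delta_e_cie76(lab1, lab2):
    return math.dist(lab1, lab2)   # Euclidean distance in L*a*b* space

measurement_1 = (72.3, 1.8, 16.5)  # repeated measurement 1 (L*, a*, b*)
measurement_2 = (72.1, 1.9, 16.7)  # repeated measurement 2
print(round(delta_e_cie76(measurement_1, measurement_2), 2))
```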

  17. Social class, contextualism, and empathic accuracy.

    PubMed

    Kraus, Michael W; Côté, Stéphane; Keltner, Dacher

    2010-11-01

    Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment. The implications of class-based patterns in empathic accuracy for well-being and relationship outcomes are discussed.

  18. On various refined theories in the bending analysis of angle-ply laminates

    NASA Astrophysics Data System (ADS)

    Savithri, S.; Varadan, T. K.

    1992-05-01

    The accuracies of six shear-deformation theories are compared by analyzing the bending of angle-ply laminates and studying the results in the light of exact solutions. The shear-deformation theories used are those by: Ren (1986), Savithri and Varadan (1990), Bhaskar and Varadan (1991), Murakami (1986), and Pandya and Kant (1988), and combinations of these. The analytical methods are similar in that the number of unknown variables in the displacement field is independent of the number of layers in the laminate. The model by Ren is based on a parabolic distribution of transverse shear stresses in each laminate layer. This model is shown to give good predictions of deflections and stresses in two-layer antisymmetric and three-layer symmetric angle-ply laminates.

  19. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  20. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  1. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  2. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  3. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Accuracy of advertising. 740.2 Section 740.2... ADVERTISING AND NOTICE OF INSURED STATUS § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  4. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  5. Accuracy of pulse oximetry in children.

    PubMed

    Ross, Patrick A; Newth, Christopher J L; Khemani, Robinder G

    2014-01-01

    For children with cyanotic congenital heart disease or acute hypoxemic respiratory failure, providers frequently make decisions based on pulse oximetry, in the absence of an arterial blood gas. The study objective was to measure the accuracy of pulse oximetry across the SpO2 (saturation from pulse oximetry) range of 65% to 97%. This institutional review board-approved prospective, multicenter observational study in 5 PICUs included 225 mechanically ventilated children with an arterial catheter. With each arterial blood gas sample, SpO2 from pulse oximetry and arterial oxygen saturations from CO-oximetry (SaO2) were simultaneously obtained if the SpO2 was ≤ 97%. The lowest SpO2 obtained in the study was 65%. In the range of SpO2 65% to 97%, 1980 simultaneous values for SpO2 and SaO2 were obtained. The bias (SpO2 - SaO2) varied through the range of SpO2 values. The bias was greatest in the SpO2 range 81% to 85% (336 samples, median 6%, mean 6.6%, accuracy root mean squared 9.1%). SpO2 measurements were close to SaO2 in the SpO2 range 91% to 97% (901 samples, median 1%, mean 1.5%, accuracy root mean squared 4.2%). Previous studies on pulse oximeter accuracy in children present a single number for bias. This study identified that the accuracy of pulse oximetry varies significantly as a function of the SpO2 range. Saturations measured by pulse oximetry on average overestimate SaO2 from CO-oximetry in the SpO2 range of 76% to 90%. Better pulse oximetry algorithms are needed for accurate assessment of children with saturations in the hypoxemic range.
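
    The two summary statistics quoted above, bias and accuracy root mean squared (ARMS), can be computed from paired SpO2/SaO2 readings as in the minimal sketch below; the paired values are illustrative, not study data.

```python
# Sketch: bias and accuracy-root-mean-squared (ARMS) for paired
# pulse-oximetry (SpO2) and CO-oximetry (SaO2) readings.
# The paired values below are illustrative, not study data.
import numpy as np

spo2 = np.array([84.0, 82.0, 88.0, 79.0, 93.0])   # pulse oximeter readings (%)
sao2 = np.array([78.0, 75.0, 83.0, 74.0, 91.0])   # CO-oximetry readings (%)

diff = spo2 - sao2
bias = diff.mean()                    # mean of (SpO2 - SaO2)
arms = np.sqrt((diff ** 2).mean())    # accuracy root mean squared
print(f"bias = {bias:.1f}%, ARMS = {arms:.1f}%")
```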

  6. Eyewitness decisions in simultaneous and sequential lineups: a dual-process signal detection theory analysis.

    PubMed

    Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H

    2005-07-01

    Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
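
    A minimal sketch of the signal detection measures implied above, discrimination (d') and response criterion (c), computed from hit and false-alarm rates, follows; the rates are illustrative, not values from the experiments.

```python
# Sketch: signal detection theory measures for lineup identification data.
# d' (discrimination accuracy) and c (response criterion) from hit and
# false-alarm rates; the rates below are illustrative, not study data.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # positive = conservative responding
    return d_prime, criterion

simultaneous = sdt_measures(hit_rate=0.60, fa_rate=0.25)
sequential = sdt_measures(hit_rate=0.45, fa_rate=0.12)
print("simultaneous d', c:", [round(v, 2) for v in simultaneous])
print("sequential   d', c:", [round(v, 2) for v in sequential])
```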

  7. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
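
    The overall and user's accuracies quoted above come from a standard error (confusion) matrix; as a reminder of the definitions, the sketch below computes both from an illustrative matrix (not NLCD data).

```python
# Sketch: overall accuracy and per-class user's accuracy from an error matrix
# (rows = map class, columns = reference class). The matrix is illustrative.
import numpy as np

classes = ["water", "urban", "forest", "agriculture"]
matrix = np.array([[120,   2,   1,   0],
                   [  3,  95,   4,   6],
                   [  2,   5, 140,  10],
                   [  0,   8,  12, 130]], dtype=float)

overall = np.trace(matrix) / matrix.sum()
users = np.diag(matrix) / matrix.sum(axis=1)   # correct / total mapped, per class
print(f"overall accuracy: {overall:.1%}")
for name, ua in zip(classes, users):
    print(f"user's accuracy ({name}): {ua:.1%}")
```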

  8. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    NASA Astrophysics Data System (ADS)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the C4.5 decision tree and rough set theory, on stock market data. The C4.5 decision tree is reviewed alongside rough set theory. An enhanced windowed application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, allowing users to input formulas and create new attributes. The application also produces three variants of the data set, using delaying, averaging, and summation. The results demonstrate that pre-processing with feature (attribute) transformations improves the C4.5 decision tree. Moreover, the comparison between the C4.5 decision tree and rough set theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different data sets.

  9. Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials.

    PubMed

    Olsen, Jógvan Magnus Haugaard; List, Nanna Holmgaard; Kristensen, Kasper; Kongsted, Jacob

    2015-04-14

    Quantum-mechanical embedding methods have in recent years gained significant interest and may now be applied to predict a wide range of molecular properties calculated at different levels of theory. To reach a high level of accuracy in embedding methods, both the electronic structure model of the active region and the embedding potential need to be of sufficiently high quality. In fact, failures in quantum mechanics/molecular mechanics (QM/MM)-based embedding methods have often been associated with the QM/MM methodology itself; however, in many cases the reason for such failures is due to the use of an inaccurate embedding potential. In this paper, we investigate in detail the quality of the electronic component of embedding potentials designed for calculations on protein biostructures. We show that very accurate explicitly polarizable embedding potentials may be efficiently designed using fragmentation strategies combined with single-fragment ab initio calculations. In fact, due to the self-interaction error in Kohn-Sham density functional theory (KS-DFT), use of large full-structure quantum-mechanical calculations based on conventional (hybrid) functionals leads to less accurate embedding potentials than fragment-based approaches. We also find that standard protein force fields yield poor embedding potentials, and it is therefore not advisable to use such force fields in general QM/MM-type calculations of molecular properties other than energies and structures.

  10. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    PubMed

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech-movement-parameters and perceptual accuracy. Analysis of the speech probe data showed all participants recorded a statistically significant change. Between phases A1-B and B-C 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor speech movement patterns targeted during the training of that intervention. The preliminary data presented in this study make a contribution to providing evidence that supports the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  11. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey-system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. In this way, we obtain a syphilis trend forecasting method based on information reuse and the grey-system GM(1,1) model.
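
    A minimal sketch of the basic GM(1,1) procedure that the paper builds on (accumulated generating operation, least-squares estimation of the development coefficient a and grey input b, then inverse accumulation) follows; the series is illustrative, not the syphilis data, and the information-reuse refinement described in the article is not shown.

```python
# Sketch: basic grey-system GM(1,1) fitting and forecasting.
# x0 is an illustrative short time series, not the syphilis data in the paper.
import numpy as np

def gm11_forecast(x0, steps_ahead=2):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # mean background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development coeff. and grey input
    k = np.arange(len(x0) + steps_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                       # inverse AGO back to the original scale
    return x0_hat

series = [12, 14, 15, 17, 20, 23]                      # illustrative case counts
print(np.round(gm11_forecast(series, steps_ahead=3), 2))
```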

  12. Achieving DFT accuracy with a machine-learning interatomic potential: Thermomechanics and defects in bcc ferromagnetic iron

    NASA Astrophysics Data System (ADS)

    Dragoni, Daniele; Daff, Thomas D.; Csányi, Gábor; Marzari, Nicola

    2018-01-01

    We show that the Gaussian Approximation Potential (GAP) machine-learning framework can describe complex magnetic potential energy surfaces, taking ferromagnetic iron as a paradigmatic challenging case. The training database includes total energies, forces, and stresses obtained from density-functional theory in the generalized-gradient approximation, and comprises approximately 150,000 local atomic environments, ranging from pristine and defected bulk configurations to surfaces and generalized stacking faults with different crystallographic orientations. We find the structural, vibrational, and thermodynamic properties of the GAP model to be in excellent agreement with those obtained directly from first-principles electronic-structure calculations. There is good transferability to quantities, such as Peierls energy barriers, which are determined to a large extent by atomic configurations that were not part of the training set. We observe the benefit and the need of using highly converged electronic-structure calculations to sample a target potential energy surface. The end result is a systematically improvable potential that can achieve the same accuracy of density-functional theory calculations, but at a fraction of the computational cost.

  13. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy.

    PubMed

    Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad

    2017-01-01

    The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details.
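
    For reference, the base two-degree-of-freedom quarter vehicle model that the enhanced variants extend can be simulated in a few lines; the parameter values and road input below are generic textbook-style numbers, not those used in the study.

```python
# Sketch: base 2-DOF quarter vehicle model (sprung mass ms, unsprung mass mu,
# suspension ks/cs, tyre stiffness kt) driven by a road profile. The parameter
# values and the step road input are generic, not those used in the study.
import numpy as np
from scipy.integrate import solve_ivp

ms, mu = 300.0, 40.0            # sprung / unsprung mass (kg)
ks, cs = 20_000.0, 1_500.0      # suspension stiffness (N/m) and damping (N s/m)
kt = 180_000.0                  # tyre stiffness (N/m)

def road(t):
    return 0.05 if t >= 0.5 else 0.0       # 5 cm step input at t = 0.5 s

def rhs(t, y):
    xs, vs, xu, vu = y
    f_susp = ks * (xu - xs) + cs * (vu - vs)   # force transmitted by the suspension
    f_tyre = kt * (road(t) - xu)               # force transmitted by the tyre spring
    return [vs, f_susp / ms, vu, (-f_susp + f_tyre) / mu]

sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print("peak sprung-mass displacement (m):", round(sol.y[0].max(), 4))
```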

  14. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy

    PubMed Central

    2017-01-01

    The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details. PMID:28617819

  15. Children's understanding of the immune system: Integrating the cognitive-developmental and intuitive theories' perspectives

    NASA Astrophysics Data System (ADS)

    Landry-Boozer, Kristine L.

    Traditional cognitive-developmental researchers have provided a large body of evidence supporting the stage-like progression of children's cognitive development. Further, from this body of research comes evidence that children's understanding of HIV/AIDS develops in much the same way as their understanding of other illness-related concepts. Researchers from a newer perspective assert that biological concepts develop from intuitive theories. In general, as children are exposed to relevant content and have opportunities to organize this information, their theories become more accurate and differentiated. According to this perspective, there are no broad structural constraints on developing concepts, as asserted by cognitive developmental theorists. The purpose of the current study was two-fold: to provide support for both theoretical perspectives, while at the same time to explore children's conceptualizations of the immune system, which has not been done previously in the cognitive-developmental literature. One hundred ninety children ranging in age from 4 years old through 11 years old, and a group of adults, participated. Each participant was interviewed regarding health concepts and the body's function in maintaining health. Participants were also asked to report if they had certain experiences that would have led to relevant content exposure. Qualitative analyses were utilized to code the interviews with rubrics based on both theoretical perspectives. Quantitative analyses consisted of a series of univariate ANOVAs (and post hoc tests when appropriate) examining all three coding variables (accuracy, differentiation, and developmental level) across various age-group combinations and exposure groups. Results of these analyses provided support for both theoretical perspectives. When the data were analyzed for developmental level by all ages, a stage-like progression consistent with Piagetian stages emerged. When accuracy and differentiation were examined (intuitive

  16. The effects of interstimulus interval on event-related indices of attention: an auditory selective attention test of perceptual load theory.

    PubMed

    Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter

    2008-03-01

    We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300ms), medium (600ms) and slow (900ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants evidenced poorer accuracy during the fast ISI condition than the slow suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, were impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.

  17. Alcohol-impaired speed and accuracy of cognitive functions: a review of acute tolerance and recovery of cognitive performance.

    PubMed

    Schweizer, Tom A; Vogel-Sprott, Muriel

    2008-06-01

    Much research on the effects of a dose of alcohol has shown that motor skills recover from impairment as blood alcohol concentrations (BACs) decline and that acute tolerance to alcohol impairment can develop during the course of the dose. Comparable alcohol research on cognitive performance is sparse but has increased with the development of computerized cognitive tasks. This article reviews the results of recent research using these tasks to test the development of acute tolerance in cognitive performance and recovery from impairment during declining BACs. Results show that speed and accuracy do not necessarily agree in detecting cognitive impairment, and this mismatch most frequently occurs during declining BACs. Speed of cognitive performance usually recovers from impairment to drug-free levels during declining BACs, whereas alcohol-increased errors fail to diminish. As a consequence, speed of cognitive processing tends to develop acute tolerance, but no such tendency is shown in accuracy. This "acute protracted error" phenomenon has not previously been documented. The findings pose a challenge to the theory of alcohol tolerance on the basis of physiological adaptation and raise new research questions concerning the independence of speed and accuracy of cognitive processes, as well as hemispheric lateralization of alcohol effects. The occurrence of alcohol-induced protracted cognitive errors long after speed returned to normal is identified as a potential threat to the safety of social drinkers that requires urgent investigation.

  18. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10^-6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  19. Daily modulation of the speed-accuracy trade-off.

    PubMed

    Gueugneau, Nicolas; Pozzo, Thierry; Darlot, Christian; Papaxanthis, Charalambos

    2017-07-25

    Goal-oriented arm movements are characterized by a balance between speed and accuracy. The relation between speed and accuracy has been formalized by Fitts' law and predicts a linear increase in movement duration with task constraints. Up to now this relation has been investigated on a short-time scale only, that is during a single experimental session, although chronobiological studies report that the motor system is shaped by circadian rhythms. Here, we examine whether the speed-accuracy trade-off could vary during the day. Healthy adults carried out arm-pointing movements as accurately and fast as possible toward targets of different sizes at various hours of the day, and variations in Fitts' law parameters were scrutinized. To investigate whether the potential modulation of the speed-accuracy trade-off has peripheral and/or central origins, a motor imagery paradigm was used as well. Results indicated a daily (circadian-like) variation for the durations of both executed and mentally simulated movements, in strictly controlled accuracy conditions. While Fitts' law was held for the whole sessions of the day, the slope of the relation between movement duration and task difficulty expressed a clear modulation, with the lowest values in the afternoon. This variation of the speed-accuracy trade-off in executed and mental movements suggests that, beyond execution parameters, motor planning mechanisms are modulated during the day. Daily update of forward models is discussed as a potential mechanism. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
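
    Fitts' law, referred to above, predicts movement time as a linear function of the index of difficulty, MT = a + b * log2(2D / W); a minimal sketch follows, with illustrative intercept, slope, and target values.

```python
# Sketch: Fitts' law, MT = a + b * ID with index of difficulty
# ID = log2(2D / W), where D is movement distance and W is target width.
# The intercept/slope values and targets below are illustrative.
import math

def movement_time(distance_mm, width_mm, a=0.10, b=0.15):
    index_of_difficulty = math.log2(2.0 * distance_mm / width_mm)  # bits
    return a + b * index_of_difficulty                             # seconds

for width in (40, 20, 10, 5):           # shrinking targets at a fixed distance
    print(width, "mm ->", round(movement_time(distance_mm=200, width_mm=width), 3), "s")
```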

  20. Haptic perception accuracy depending on self-produced movement.

    PubMed

    Park, Chulwook; Kim, Seonjin

    2014-01-01

    This study measured whether self-produced movement influences haptic perception ability (experiment 1) as well as the factors associated with levels of influence (experiment 2) in racket sports. For experiment 1, the haptic perception accuracy levels of five male table tennis experts and five male novices were examined under two different conditions (no movement vs. movement). For experiment 2, the haptic afferent subsystems of five male table tennis experts and five male novices were investigated in only the self-produced movement-coupled condition. Inferential statistics (ANOVA, t-tests) and custom-made devices (a shock and vibration sensor, Qualisys Track Manager) were used to determine the haptic perception accuracy (experiment 1, experiment 2) and its association with expertise. The results of this research show that expert-level players acquire higher accuracy with less variability (racket vibration and angle) than novice-level players, especially in their self-produced movement-coupled performances. The important finding from this result is that, in terms of accuracy, the skill-associated differences were enlarged during self-produced movement. To explain the origin of this difference between experts and novices, the functional variability of haptic afferent subsystems can serve as a reference. These two factors (self-produced accuracy and the variability of haptic features) as investigated in this study would be useful criteria for educators in racket sports and suggest a broader hypothesis for further research into the effects of haptic accuracy related to variability.

  1. Re-examination of "release-from-PI" phenomena: recall accuracy does not recover after a semantic switch.

    PubMed

    Hubbard, Nicholas A; Weaver, Travis P; Turner, Monroe P; Rypma, Bart

    2018-01-29

    Recall accuracy decreases over successive memory trials using similar memoranda. This effect reflects proactive interference (PI) - the tendency for previously studied information to reduce recall of new information. However, recall improves if memoranda for a subsequent trial are semantically dissimilar from the previous trials. This improvement is thought to reflect a release from PI. We tested whether PI is reduced or released from the semantic category for which it had been induced by employing paradigms which featured inducement, semantic switch, and then return-to-original category epochs. Two experiments confirmed that PI was not released after various semantic switch trials (effects from d = -0.93 to -1.6). Combined analyses from both studies demonstrated that the number of intervening new-category trials did not reduce or release PI. In fact, in all conditions recall accuracy decreased, demonstrating that PI is maintained and can increase after the new-category trials. The release-from-PI account cannot accommodate these broader dynamics of PI. This account is also incongruent with evidence and theory from cognitive psychology, linguistics, and neuroscience. We propose a reintroduction-of-PI account which explains these broader PI dynamics and is consistent with the wider psychological and neural sciences.

  2. Accuracy of references and quotations in veterinary journals.

    PubMed

    Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L

    1993-02-01

    The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; ie, the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) major errors were found involving authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.

  3. Theory and Circuit Model for Lossy Coaxial Transmission Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.

    2017-04-01

    The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
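
    The paper's own formulas are not reproduced here, but the standard low-loss textbook approximations give a feel for the quantities involved: skin-effect conductor loss plus dielectric loss. The sketch below evaluates them for an assumed RG-58-like geometry; all dimensions and material constants are illustrative assumptions.

      import numpy as np

      # Standard low-loss coax approximations (illustrative, not the paper's
      # new formulas): conductor loss from the skin effect plus dielectric loss.
      a, b = 0.45e-3, 1.47e-3            # inner/outer conductor radii, m (assumed)
      eps_r, tan_d = 2.25, 2e-4          # polyethylene-like dielectric (assumed)
      sigma = 5.8e7                      # copper conductivity, S/m
      mu0, eps0 = 4e-7 * np.pi, 8.854e-12

      f = 1e9                                         # frequency, Hz
      Rs = np.sqrt(np.pi * f * mu0 / sigma)           # surface resistance, ohm
      R_len = Rs / (2 * np.pi) * (1 / a + 1 / b)      # series resistance, ohm/m
      Z0 = np.sqrt(mu0 / (eps0 * eps_r)) / (2 * np.pi) * np.log(b / a)
      alpha_c = R_len / (2 * Z0)                                  # Np/m, conductors
      alpha_d = np.pi * f * np.sqrt(mu0 * eps0 * eps_r) * tan_d   # Np/m, dielectric
      print(f"Z0 = {Z0:.1f} ohm, attenuation = {8.686 * (alpha_c + alpha_d):.2f} dB/m")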

  4. From square-well to Janus: Improved algorithm for integral equation theory and comparison with thermodynamic perturbation theory within the Kern-Frenkel model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giacometti, Achille, E-mail: achille.giacometti@unive.it; Gögelein, Christoph, E-mail: christoph.goegelein@ds.mpg.de; Lado, Fred, E-mail: lado@ncsu.edu

    2014-03-07

    Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function.

  5. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  6. Configurational forces in electronic structure calculations using Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Motamarri, Phani; Gavini, Vikram

    2018-04-01

    We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point x . These configurational forces that result from the inner variations of the Kohn-Sham energy functional provide a unified framework to compute atomic forces as well as stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham density functional theory (DFT) expressed in terms of the nonorthogonal wave functions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and, further, verify the accuracy from finite differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing plane-wave basis (pseudopotential calculations) and Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique [P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127 (2014), 10.1103/PhysRevB.90.115127].

  7. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  8. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target-present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provide support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. The theory of granular packings for coarse soils

    NASA Astrophysics Data System (ADS)

    Yanqui, Calixtro

    2013-06-01

    Coarse soils are substances made of grains of different shape, size and orientation. In this paper, new massive-measurable grain indexes are defined to develop a simple and systematic theory for the ideal packing of grains. First, a linear relationship between an assemblage of monodisperse spheres and an assemblage of polydisperse grains is deduced. Then, a general formula for the porosity of linearly ordered packings of spheres in contact is established by appropriately choosing eight neighboring spheres located at the vertices of the unit parallelepiped. The porosity of axisymmetric packings of grains, related to sand piles and axisymmetric compression tests, is proposed to be determined by averaging the respective linear parameters. Since they can be tested experimentally, the porosities of the densest and loosest states of a granular soil can be used to verify the accuracy of the present theory. Diagrams for these extreme quantities show good agreement between the theoretical lines and the experimental data, regardless of the protocols and mineral composition.

  10. Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.

    2017-03-01

    We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  11. The accuracy of general practitioner workforce projections

    PubMed Central

    2013-01-01

    Background Health workforce projections are important instruments to prevent imbalances in the health workforce. For both the tenability and further development of these projections, it is important to evaluate their accuracy. In the Netherlands, health workforce projections have been done since 2000 to support health workforce planning. What is the accuracy of the techniques used in these Dutch general practitioner workforce projections? Methods We backtested the workforce projection model by comparing the ex-post projected number of general practitioners with the observed number of general practitioners between 1998 and 2011. Averages of historical data were used for all elements except for inflow in training. As the required training inflow is the key result of the workforce planning model, and has actually determined past adjustments of training inflow, the accuracy of the model was backtested using the observed training inflow and not an average of historical data, to avoid the interference of past policy decisions. The accuracy of projections with different lengths of projection horizon and base period (on which the projections are based) was tested. Results The workforce projection model underestimated the number of active Dutch general practitioners in most years. The mean absolute percentage errors range from 1.9% to 14.9%, with the projections being more accurate in more recent years. Furthermore, projections with a shorter projection horizon have a higher accuracy than those with a longer horizon. Unexpectedly, projections with a shorter base period have a higher accuracy than those with a longer base period. Conclusions According to the results of the present study, forecasting the size of the future workforce did not become more difficult between 1998 and 2011, as we originally expected. Furthermore, projections with a short projection horizon and a short base period are more accurate than projections with a longer projection horizon and base period.
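
    The accuracy measure quoted, the mean absolute percentage error, is simple to reproduce; the sketch below computes it for invented projected and observed head counts, only to make the metric explicit (the numbers are not the Dutch registry data).

      import numpy as np

      # MAPE between ex-post projections and observed numbers of active GPs.
      # The counts below are invented placeholders.
      observed  = np.array([8200, 8350, 8500, 8700], dtype=float)
      projected = np.array([8000, 8100, 8300, 8600], dtype=float)

      mape = np.mean(np.abs((projected - observed) / observed)) * 100
      print(f"MAPE = {mape:.1f}%")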

  12. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to improve when the image pixel size is decreased.
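
    The vertical-accuracy figure (< 10 cm) comes from differencing DSM elevations against GNSS checkpoints; a minimal sketch of that check is shown below with made-up elevations, not the survey data.

      import numpy as np

      # Vertical accuracy check: DSM elevation minus GNSS checkpoint elevation.
      # The elevations are placeholders.
      z_gnss = np.array([2.31, 2.45, 2.60, 2.52, 2.40])   # m
      z_dsm  = np.array([2.35, 2.41, 2.66, 2.49, 2.47])   # m

      diff = z_dsm - z_gnss
      print(f"mean error = {diff.mean():.3f} m, "
            f"RMSE = {np.sqrt(np.mean(diff**2)):.3f} m")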

  13. A Game Theory Based Solution for Security Challenges in CRNs

    NASA Astrophysics Data System (ADS)

    Poonam; Nagpal, Chander Kumar

    2018-03-01

    Cognitive radio networks (CRNs) are being envisioned to drive the next generation of ad hoc wireless networks due to their ability to provide communications resilience in continuously changing environments through the use of dynamic spectrum access. Conventionally, CRNs are dependent upon the information gathered by other secondary users to ensure the accuracy of spectrum sensing, making them vulnerable to security attacks and leading to the need for security mechanisms such as cryptography and trust. However, a typical cryptography-based solution is not a viable security solution for CRNs owing to their limited resources. The effectiveness of trust-based approaches has always been in question due to the credibility of secondary trust resources. Game theory, with its ability to optimize in an environment of conflicting interests, can be quite a suitable tool to manage an ad hoc network in the presence of autonomous selfish/malevolent/malicious and attacker nodes. The literature contains several theoretical proposals for augmenting ad hoc networks with game theory without explicit/detailed implementation. This paper implements a game theory based solution in MATLAB-2015 to secure the CRN environment and compares the obtained results with the traditional approaches of trust and cryptography. The simulation results indicate that as time progresses the game theory based approach performs much better, with higher throughput, lower jitter and better identification of selfish/malicious nodes.

  14. You never think about my feelings: interpersonal dominance as a predictor of emotion decoding accuracy.

    PubMed

    Moeller, Sara K; Lee, Elizabeth A Ewing; Robinson, Michael D

    2011-08-01

    Dominance and submission constitute fundamentally different social interaction strategies that may be enacted most effectively to the extent that the emotions of others are relatively ignored (dominance) versus noticed (submission). On the basis of such considerations, we hypothesized a systematic relationship between chronic tendencies toward high versus low levels of interpersonal dominance and emotion decoding accuracy in objective tasks. In two studies (total N = 232), interpersonally dominant individuals exhibited poorer levels of emotion recognition in response to audio and video clips (Study 1) and facial expressions of emotion (Study 2). The results provide a novel perspective on interpersonal dominance, suggest its strategic nature (Study 2), and are discussed in relation to Fiske's (1993) social-cognitive theory of power. 2011 APA, all rights reserved

  15. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  16. Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers

    PubMed Central

    Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon

    2016-01-01

    Abstract Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the various clinico-pathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic results of the resected specimens. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in the stomach, tumor invasion depth, and endoscopic ulcer shape) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as definite ulcers, 98 as superficial ulcers, and 102 as ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shape (P = 0.001). In multivariate analysis, there is a significant association between superficial ulcer in CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255–7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672

  17. Accuracy optimization with wavelength tunability in overlay imaging technology

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, the overlay budget is accordingly being reduced. The overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect on total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and demonstrate their quality in identifying measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  18. Accuracy of parameterized proton range models; A comparison

    NASA Astrophysics Data System (ADS)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
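
    Two of the four approaches can be sketched directly: the Bragg-Kleeman power law R = alpha * E^p and a spline over a range-energy table. The alpha, p and table values below are commonly quoted approximate figures for protons in water, not the parameters fitted in the paper.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # Bragg-Kleeman power law vs. spline interpolation of a range-energy table.
      # alpha, p and the table are approximate textbook values for water, assumed here.
      alpha, p = 0.0022, 1.77                     # cm, MeV^-p (approximate)
      bragg_kleeman = lambda E: alpha * E ** p    # range in cm for energy E in MeV

      E_tab = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])   # MeV
      R_tab = np.array([0.12, 2.22, 7.72, 15.8, 26.0, 37.9])       # cm, approximate
      spline = CubicSpline(E_tab, R_tab)

      for E in (70.0, 150.0, 230.0):
          print(f"E = {E:5.1f} MeV: BK = {bragg_kleeman(E):6.2f} cm, "
                f"spline = {float(spline(E)):6.2f} cm")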

  19. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which will decrease the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and then the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  20. Evidence for Enhanced Interoceptive Accuracy in Professional Musicians

    PubMed Central

    Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.

    2015-01-01

    Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability of internal perception is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has shown to alter motor, sensory, and multimodal representations in the brain of musicians. We hypothesize that musical training also heightens interoceptive accuracy comparable to other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string-players. Regression analyses showed that accumulated musical practice explained about 49% variation in heartbeat perception accuracy in singers but not in string-players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence indicating that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounted for this effect. PMID:26733836

  1. English Verb Accuracy of Bilingual Cantonese-English Preschoolers.

    PubMed

    Rezzonico, Stefano; Goldberg, Ahuva; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi

    2017-07-26

    Knowledge of verb development in typically developing bilingual preschoolers may inform clinicians about verb accuracy rates during the 1st 2 years of English instruction. This study aimed to investigate tensed verb accuracy in 2 assessment contexts in 4- and 5-year-old Cantonese-English bilingual preschoolers. The sample included 47 Cantonese-English bilinguals enrolled in English preschools. Half of the children were in their 1st 4 months of English language exposure, and half had completed 1 year and 4 months of exposure to English. Data were obtained from the Test of Early Grammatical Impairment (Rice & Wexler, 2001) and from a narrative generated in English. By the 2nd year of formal exposure to English, children in the present study approximated 33% accuracy of tensed verbs in a formal testing context versus 61% in a narrative context. The use of the English verb BE approximated mastery. Predictors of English third-person singular verb accuracy were task, grade, English expressive vocabulary, and lemma frequency. Verb tense accuracy was low across both groups, but a precocious mastery of BE was observed. The results of the present study suggest that speech-language pathologists may consider, in addition to an elicitation task, evaluating the use of verbs during narratives in Cantonese-English bilingual children.

  2. Reliability and accuracy of four dental shade-matching devices.

    PubMed

    Kim-Pusateri, Seungyee; Brewer, Jane D; Davis, Elaine L; Wee, Alvin G

    2009-03-01

    There are several electronic shade-matching instruments available for clinical use, but the reliability and accuracy of these instruments have not been thoroughly investigated. The purpose of this in vitro study was to evaluate the reliability and accuracy of 4 dental shade-matching instruments in a standardized environment. Four shade-matching devices were tested: SpectroShade, ShadeVision, VITA Easyshade, and ShadeScan. Color measurements were made of 3 commercial shade guides (Vitapan Classical, Vitapan 3D-Master, and Chromascop). Shade tabs were placed in the middle of a gingival matrix (Shofu GUMY) with shade tabs of the same nominal shade from additional shade guides placed on both sides. Measurements were made of the central region of the shade tab positioned inside a black box. For the reliability assessment, each shade tab from each of the 3 shade guide types was measured 10 times. For the accuracy assessment, each shade tab from 10 guides of each of the 3 types evaluated was measured once. Differences in reliability and accuracy were evaluated using the Standard Normal z test (2 sided) (alpha=.05) with Bonferroni correction. Reliability of devices was as follows: ShadeVision, 99.0%; SpectroShade, 96.9%; VITA Easyshade, 96.4%; and ShadeScan, 87.4%. A significant difference in reliability was found between ShadeVision and ShadeScan (P=.008). All other comparisons showed similar reliability. Accuracy of devices was as follows: VITA Easyshade, 92.6%; ShadeVision, 84.8%; SpectroShade, 80.2%; and ShadeScan, 66.8%. Significant differences in accuracy were found between all device pairs (P<.001) for all comparisons except for SpectroShade versus ShadeVision (P=.033). Most devices had similar high reliability (over 96%), indicating predictable shade values from repeated measurements. However, there was more variability in accuracy among devices (67-93%), and differences in accuracy were seen with most device comparisons.
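
    The pairwise device comparisons described are two-sided z-tests on proportions with a Bonferroni-adjusted alpha; one common form of that test is sketched below on invented counts (this is not the study's raw data).

      from math import sqrt
      from scipy.stats import norm

      # Two-proportion z-test with Bonferroni correction, as used for the
      # pairwise device comparisons. Counts below are invented placeholders.
      def two_prop_z(x1, n1, x2, n2):
          p1, p2 = x1 / n1, x2 / n2
          p = (x1 + x2) / (n1 + n2)                       # pooled proportion
          z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
          return z, 2 * norm.sf(abs(z))                   # two-sided p-value

      alpha_adj = 0.05 / 6                                # 4 devices -> 6 pairs
      z, pval = two_prop_z(990, 1000, 874, 1000)          # e.g. 99.0% vs 87.4%
      print(f"z = {z:.2f}, p = {pval:.4f}, "
            f"significant at Bonferroni level: {pval < alpha_adj}")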

  3. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  4. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
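
    The two descriptive metrics named, RMSE and a Theil inequality coefficient, are easy to state explicitly. The sketch below uses the U1 form of Theil's statistic and invented consumption figures, so it only illustrates the comparison of a model forecast against a naive random-walk benchmark.

      import numpy as np

      # RMSE and Theil U1 for an econometric forecast and a naive benchmark.
      # All series are invented, purely to show the metric definitions.
      actual      = np.array([102.0, 104.0, 101.0, 107.0, 110.0])
      model       = np.array([101.0, 105.5, 103.0, 105.0, 108.5])
      random_walk = np.array([100.0, 102.0, 104.0, 101.0, 107.0])  # last observed value

      def rmse(f, y):
          return np.sqrt(np.mean((f - y) ** 2))

      def theil_u1(f, y):
          return rmse(f, y) / (np.sqrt(np.mean(f ** 2)) + np.sqrt(np.mean(y ** 2)))

      for name, f in (("model", model), ("random walk", random_walk)):
          print(f"{name:12s} RMSE = {rmse(f, actual):.2f}, "
                f"Theil U1 = {theil_u1(f, actual):.4f}")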

  5. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    PubMed

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and to assess the in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference scans with ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of the ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference in accuracy between two scanner groups, 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference from IOS; however, deviations of IOS and IMPR were of a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from the midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefits in accuracy can be as great as 6 °C, or five times better than relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and on the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  7. Active corrosion protection of AA2024 by sol-gel coatings with corrosion inhibitors

    NASA Astrophysics Data System (ADS)

    Yasakau, Kiryl

    The aeronautical industry uses high-strength aluminium alloys to manufacture the structural elements of aircraft. The alloys used have excellent mechanical properties but at the same time show a strong tendency towards corrosion. For this reason these alloys need effective anticorrosive protection in order to be used safely. To date, the most effective anticorrosion systems for aluminium alloys contain hexavalent chromium in their composition, whether as pre-treatments, conversion layers or anticorrosive pigments. Recognition of the carcinogenic effects of hexavalent chromium led to legislation banning the use of this form of chromium by industry. This decision brought the need to find environmentally harmless yet equally effective alternatives. The main objective of the present work is the development of active anticorrosive pre-treatments for aluminium alloy 2024, based on hybrid coatings produced by the sol-gel method. These coatings should possess good adhesion to the metallic substrate, good barrier properties and active anticorrosion capability. Active protection can be achieved through the incorporation of corrosion inhibitors in the pre-treatment. The objective was reached through a succession of stages. First, the localized (pitting) corrosion of aluminium alloy 2024 was investigated in detail. The results obtained allowed a better understanding of the susceptibility of this alloy to localized corrosion processes. Several possible corrosion inhibitors were also studied using electrochemical and microstructural techniques. In a second stage, hybrid organic-inorganic anticorrosive coatings based on the sol-gel method were developed. Compounds derived from titania and zirconia were combined with organofunctional siloxanes in order to obtain good adhesion between the coating and the metallic substrate as well as good barrier properties. Tests

  8. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e., embedding dimension, time lag, and nearest-neighbor number. The optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to a locally optimal choice and thus limit the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the globally optimal embedding parameters. It is also compared with another global optimization approach, the genetic algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA is also advantageous in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
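
    A minimal sketch of the local model is given below: delay embedding of a scalar series followed by a nearest-neighbour prediction. The embedding dimension, time lag and neighbour count are exactly the parameters the paper tunes with simulated annealing; here they are fixed, arbitrary values and the series is synthetic.

      import numpy as np

      # Local (nearest-neighbour) prediction on a delay-embedded series.
      # m, tau and k are illustrative; the paper optimises them globally.
      def embed(x, m, tau):
          n = len(x) - (m - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

      def local_predict(x, m=3, tau=2, k=4):
          X = embed(x, m, tau)
          query, history = X[-1], X[:-1]
          targets = x[(m - 1) * tau + 1:]          # value following each history vector
          dist = np.linalg.norm(history - query, axis=1)
          nearest = np.argsort(dist)[:k]
          return targets[nearest].mean()           # average of the neighbours' successors

      rng = np.random.default_rng(0)
      flow = np.sin(0.3 * np.arange(200)) + 0.05 * rng.normal(size=200)  # synthetic series
      print(local_predict(flow))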

  9. Compassion meditation enhances empathic accuracy and related neural activity

    PubMed Central

    Mascaro, Jennifer S.; Rilling, James K.; Tenzin Negi, Lobsang; Raison, Charles L.

    2013-01-01

    The ability to accurately infer others’ mental states from facial expressions is important for optimal social functioning and is fundamentally impaired in social cognitive disorders such as autism. While pharmacologic interventions have shown promise for enhancing empathic accuracy, little is known about the effects of behavioral interventions on empathic accuracy and related brain activity. This study employed a randomized, controlled and longitudinal design to investigate the effect of a secularized analytical compassion meditation program, cognitive-based compassion training (CBCT), on empathic accuracy. Twenty-one healthy participants received functional MRI scans while completing an empathic accuracy task, the Reading the Mind in the Eyes Test (RMET), both prior to and after completion of either CBCT or a health discussion control group. Upon completion of the study interventions, participants randomized to CBCT were significantly more likely than control subjects to have increased scores on the RMET and increased neural activity in the inferior frontal gyrus (IFG) and dorsomedial prefrontal cortex (dmPFC). Moreover, changes in dmPFC and IFG activity from baseline to the post-intervention assessment were associated with changes in empathic accuracy. These findings suggest that CBCT may hold promise as a behavioral intervention for enhancing empathic accuracy and the neurobiology supporting it. PMID:22956676

  10. On the accuracy of ERS-1 orbit predictions

    NASA Technical Reports Server (NTRS)

    Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.

    1993-01-01

    Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained. Problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy is compiled for about 250 days since launch. The average accuracy lies in the range of 50-100 ms and has improved considerably.

  11. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  12. The Influence of Motor Skills on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Brychta, Petr; Sadílek, Marek; Brychta, Josef

    2016-10-01

    This innovative study attempts an interdisciplinary interface between two fields that at first view seem quite different: kinanthropology and mechanical engineering. A motor skill is described as an action which involves the movement of muscles in a body. Gross motor skills permit functions such as running, jumping, walking, punching, lifting and throwing a ball, maintaining body balance, and coordinating. Fine motor skills capture smaller neuromuscular actions, such as holding an object between the thumb and a finger. In mechanical inspection, the accuracy of measurement is the most important aspect. The accuracy of measurement to some extent also depends upon the sense of sight or sense of touch associated with fine motor skills. It is therefore clear that the level of motor skills will affect the precision and accuracy of measurement in metrology. The aim of this study is a literature review to establish the fine motor skill levels of individuals and to determine the potential effect of different fine motor skill performance on the precision and accuracy of mechanical engineering measurement.

  13. Small Body Landing Accuracy Using In-Situ Navigation

    NASA Technical Reports Server (NTRS)

    Bhaskaran, Shyam; Nandi, Sumita; Broschart, Stephen; Wallace, Mark; Olson, Corwin; Cangahuala, L. Alberto

    2011-01-01

    Spacecraft landings on small bodies (asteroids and comets) can require target accuracies too stringent to be met using ground-based navigation alone, especially if specific landing site requirements must be met for safety or to meet science goals. In-situ optical observations coupled with onboard navigation processing can meet the tighter accuracy requirements to enable such missions. Recent developments in deep space navigation capability include a self-contained autonomous navigation system (used in flight on three missions) and a landmark tracking system (used experimentally on the Japanese Hayabusa mission). The merging of these two technologies forms a methodology to perform autonomous onboard navigation around small bodies. This paper presents an overview of these systems, as well as the results from Monte Carlo studies to quantify the achievable landing accuracies by using these methods. Sensitivity of the results to variations in spacecraft maneuver execution error, attitude control accuracy and unmodeled forces are examined. Cases for two bodies, a small asteroid and on a mid-size comet, are presented.

  14. Building an Evaluation Scale using Item Response Theory

    PubMed Central

    Lalor, John P.; Wu, Hao; Yu, Hong

    2016-01-01

    Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance of a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern. PMID:28004039
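
    The core IRT idea can be shown with a two-parameter logistic item model, in which each item has a difficulty and a discrimination and the probability of a correct response depends on latent ability; the item parameters below are invented, not those fitted to the RTE data.

      import numpy as np

      # Two-parameter logistic (2PL) item response model sketch.
      # (a, b) pairs are invented item parameters, theta is latent ability.
      def p_correct(theta, a, b):
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      items = [(1.5, -0.5), (0.8, 0.0), (2.0, 1.2)]   # (discrimination, difficulty)
      for theta in (-1.0, 0.0, 1.0):
          probs = [p_correct(theta, a, b) for a, b in items]
          print(theta, [round(p, 2) for p in probs])

    Under such a model, two systems with the same raw accuracy can receive different ability estimates, depending on which items they answer correctly.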

  15. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  16. Rotor Performance at High Advance Ratio: Theory versus Test

    NASA Technical Reports Server (NTRS)

    Harris, Franklin D.

    2008-01-01

    Five analytical tools have been used to study rotor performance at high advance ratio. One is representative of autogyro rotor theory in 1934 and four are representative of helicopter rotor theory in 2008. The five theories are measured against three sets of well documented, full-scale, isolated rotor performance experiments. The major finding of this study is that the decades spent by many rotorcraft theoreticians to improve prediction of basic rotor aerodynamic performance have paid off. This payoff, illustrated by comparing the CAMRAD II comprehensive code and Wheatley & Bailey theory to H-34 test data, shows that rational rotor lift to drag ratios are now predictable. The 1934 theory predicted L/D ratios as high as 15. CAMRAD II predictions compared well with H-34 test data having L/D ratios more on the order of 7 to 9. However, the detailed examination of the selected codes compared to H-34 test data indicates that, above an advance ratio of 0.62, not one of the codes can predict to engineering accuracy the control positions and shaft angle of attack required for a given lift. There is no full-scale rotor performance data available for advance ratios above 1.0, and extrapolation of currently available data to advance ratios on the order of 2.0 is unreasonable despite the needs of future rotorcraft. Therefore, it is recommended that an overly strong full-scale rotor blade set be obtained and tested in a suitable wind tunnel to at least an advance ratio of 2.5. A tail rotor from a Sikorsky CH-53 or other large single rotor helicopter should be adequate for this exploratory experiment.

  17. Final Technical Report: Increasing Prediction Accuracy.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  18. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  19. Photon caliper to achieve submillimeter positioning accuracy

    NASA Astrophysics Data System (ADS)

    Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan

    2017-09-01

    The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.

  20. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
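
    For readers unfamiliar with these indices, the sketch below computes the five statistics from a 2 x 2 confidence-accuracy table using their common textbook definitions; the counts are invented and the formulas are the standard ones rather than anything specific to this study.

      from statistics import NormalDist

      # Hypothetical 2x2 calibration table (counts are invented):
      # hits = confident and correct, misses = not confident but correct,
      # fas  = confident but incorrect, crs = not confident and incorrect.
      hits, misses, fas, crs = 40, 10, 15, 35
      n = hits + misses + fas + crs

      sensitivity = hits / (hits + misses)
      specificity = crs / (crs + fas)
      g_index = ((hits + crs) - (fas + misses)) / n                    # agreement-based index
      gamma = (hits * crs - fas * misses) / (hits * crs + fas * misses)
      z = NormalDist().inv_cdf                                         # inverse normal CDF
      d_prime = z(hits / (hits + misses)) - z(fas / (fas + crs))

      print(sensitivity, specificity, g_index, gamma, round(d_prime, 3))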

  1. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability of implant restorations. The aim of this study was to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created from direct digitalization. Thirty gypsum models and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner, and 30 STL datasets from each group were imported into an inspection software package. The datasets were aligned to the reference dataset by a repeated best-fit algorithm, and mean volumetric deviations were measured at 10 specified contact locations of interest. The areas were pooled by cusps, fossae, interproximal contacts, and the horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, the differences in fossae and in vertical displacement of the implant position between the gypsum and digitally milled models and the reference model were statistically significant (p<0.001 and p=0.020, respectively). PMID:24720423

  2. Neural correlates of empathic accuracy in adolescence

    PubMed Central

    Kral, Tammi R A; Solis, Enrique; Mumford, Jeanette A; Schuyler, Brianna S; Flook, Lisa; Rifken, Katharine; Patsenko, Elena G

    2017-01-01

    Empathy, the ability to understand others’ emotions, can occur through perspective taking and experience sharing. Neural systems active when adults empathize include regions underlying perspective taking (e.g. medial prefrontal cortex; MPFC) and experience sharing (e.g. inferior parietal lobule; IPL). It is unknown whether adolescents utilize networks implicated in both experience sharing and perspective taking when accurately empathizing. This question is critical given the importance of accurately understanding others’ emotions for developing and maintaining adaptive peer relationships during adolescence. We extend the literature on empathy in adolescence by determining the neural basis of empathic accuracy, a behavioral assay of empathy that does not bias participants toward the exclusive use of perspective taking or experience sharing. Participants (N = 155, aged 11.1–15.5 years) watched videos of ‘targets’ describing emotional events and continuously rated the targets’ emotions during functional magnetic resonance imaging scanning. Empathic accuracy related to activation in regions underlying perspective taking (MPFC, temporoparietal junction and superior temporal sulcus), while activation in regions underlying experience sharing (IPL, anterior cingulate cortex and anterior insula) related to lower empathic accuracy. These results provide novel insight into the neural basis of empathic accuracy in adolescence and suggest that perspective taking processes may be effective for increasing empathy. PMID:28981837

  3. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
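
    As a worked illustration of the ellipse method mentioned above, the impact angle is commonly estimated from the axes of the approximately elliptical primary defect. The relation below is the standard geometric approximation and is not taken from this paper, which quantifies how far such estimates deviate in practice.

      import math

      def ellipse_impact_angle_deg(minor_axis_mm: float, major_axis_mm: float) -> float:
          """Approximate angle between trajectory and target surface from the
          width (minor axis) and length (major axis) of an elliptical defect:
          sin(alpha) = width / length. A geometric idealization only."""
          return math.degrees(math.asin(minor_axis_mm / major_axis_mm))

      # e.g. a 9 mm wide, 14 mm long defect -> roughly 40 degrees
      print(round(ellipse_impact_angle_deg(9.0, 14.0), 1))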

  4. Illusory expectations can affect retrieval-monitoring accuracy.

    PubMed

    McDonough, Ian M; Gallo, David A

    2012-03-01

    The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. © 2012 APA, all rights reserved.

  5. Exchange-correlation approximations for reduced-density-matrix-functional theory at finite temperature: Capturing magnetic phase transitions in the homogeneous electron gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldsiefen, Tim; Cangi, Attila; Eich, F. G.

    Here, we derive an intrinsically temperature-dependent approximation to the correlation grand potential for many-electron systems in thermodynamical equilibrium in the context of finite-temperature reduced-density-matrix-functional theory (FT-RDMFT). We demonstrate its accuracy by calculating the magnetic phase diagram of the homogeneous electron gas. We compare it to known limits from highly accurate quantum Monte Carlo calculations as well as to phase diagrams obtained within existing exchange-correlation approximations from density functional theory and zero-temperature RDMFT.

  6. Exchange-correlation approximations for reduced-density-matrix-functional theory at finite temperature: Capturing magnetic phase transitions in the homogeneous electron gas

    DOE PAGES

    Baldsiefen, Tim; Cangi, Attila; Eich, F. G.; ...

    2017-12-18

    Here, we derive an intrinsically temperature-dependent approximation to the correlation grand potential for many-electron systems in thermodynamical equilibrium in the context of finite-temperature reduced-density-matrix-functional theory (FT-RDMFT). We demonstrate its accuracy by calculating the magnetic phase diagram of the homogeneous electron gas. We compare it to known limits from highly accurate quantum Monte Carlo calculations as well as to phase diagrams obtained within existing exchange-correlation approximations from density functional theory and zero-temperature RDMFT.

  7. Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)

    USGS Publications Warehouse

    Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne

    2017-01-01

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other
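
    The user's, producer's and overall accuracies quoted above all derive from a map-versus-reference error matrix. The sketch below shows the standard calculation on an invented three-class matrix; the actual NLCD assessment uses many more classes and inclusion-probability weighting.

      import numpy as np

      # Invented 3-class error matrix: rows = map label, columns = reference label.
      m = np.array([[85.0,  5.0, 10.0],
                    [ 4.0, 90.0,  6.0],
                    [ 9.0,  7.0, 84.0]])

      overall = np.trace(m) / m.sum()            # fraction of samples mapped correctly
      users = np.diag(m) / m.sum(axis=1)         # per class, complement of commission error
      producers = np.diag(m) / m.sum(axis=0)     # per class, complement of omission error
      print(overall, users, producers)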

  8. Rational Density Functional Selection Using Game Theory.

    PubMed

    McAnanama-Brereton, Suzanne; Waller, Mark P

    2018-01-22

    Theoretical chemistry has a paradox of choice due to the availability of a myriad of density functionals and basis sets. Traditionally, a particular density functional is chosen on the basis of the level of user expertise (i.e., subjective experiences). Herein we circumvent the user-centric selection procedure by describing a novel approach for objectively selecting a particular functional for a given application. We achieve this by employing game theory to identify optimal functional/basis set combinations. A three-player (accuracy, complexity, and similarity) game is devised, through which Nash equilibrium solutions can be obtained. This approach has the advantage that results can be systematically improved by enlarging the underlying knowledge base, and the deterministic selection procedure mathematically justifies the density functional and basis set selections.
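
    The selection idea rests on finding Nash equilibria of a game whose strategies are functional/basis-set combinations. The brute-force finder below is a generic illustration of a pure-strategy Nash search, not the authors' three-player accuracy/complexity/similarity implementation; the payoff tables and player count are placeholders.

      import itertools
      import numpy as np

      def pure_nash_equilibria(payoffs):
          """Return all pure-strategy Nash equilibria. payoffs[i] is an array
          holding player i's payoff for every strategy profile (all arrays
          share one shape). Generic brute force, placeholder for the paper's game."""
          shape = payoffs[0].shape
          equilibria = []
          for profile in itertools.product(*(range(s) for s in shape)):
              stable = True
              for i, table in enumerate(payoffs):
                  best_alt = max(table[profile[:i] + (a,) + profile[i + 1:]]
                                 for a in range(shape[i]))
                  if table[profile] < best_alt:   # player i could profitably deviate
                      stable = False
                      break
              if stable:
                  equilibria.append(profile)
          return equilibria

      # Tiny invented 2-player example (rows = player 0, columns = player 1).
      p0 = np.array([[3, 0], [1, 2]])
      p1 = np.array([[2, 1], [0, 3]])
      print(pure_nash_equilibria([p0, p1]))       # -> [(0, 0), (1, 1)]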

  9. Optimum free energy in the reference functional approach for the integral equations theory

    NASA Astrophysics Data System (ADS)

    Ayadim, A.; Oettel, M.; Amokrane, S.

    2009-03-01

    We investigate the question of determining the bulk properties of liquids, required as input for practical applications of the density functional theory of inhomogeneous systems, using density functional theory itself. By considering the reference functional approach in the test particle limit, we derive an expression of the bulk free energy that is consistent with the closure of the Ornstein-Zernike equations in which the bridge functions are obtained from the reference system bridge functional. By examining the connection between the free energy functional and the formally exact bulk free energy, we obtain an improved expression of the corresponding non-local term in the standard reference hypernetted chain theory derived by Lado. In this way, we also clarify the meaning of the recently proposed criterion for determining the optimum hard-sphere diameter in the reference system. This leads to a theory in which the sole input is the reference system bridge functional both for the homogeneous system and the inhomogeneous one. The accuracy of this method is illustrated with the standard case of the Lennard-Jones fluid and with a Yukawa fluid with very short range attraction.
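
    For reference, the closure discussed here couples the Ornstein-Zernike relation to a bridge function supplied by the reference (hard-sphere) system; the standard forms, quoted for orientation rather than taken from the paper, are

      h(r) = c(r) + \rho \int c(|\mathbf{r} - \mathbf{r}'|)\, h(r')\, \mathrm{d}\mathbf{r}',
      \qquad
      g(r) = \exp\!\left[-\beta u(r) + h(r) - c(r) + B(r)\right],

    where g(r) = h(r) + 1 is the pair correlation function, c(r) the direct correlation function, u(r) the pair potential, β the inverse temperature, and B(r) the bridge function taken from the reference-system functional.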

  10. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. The new model is therefore considered to reflect Born perturbation theory exactly. Simulation results confirm the accuracy of this new model.
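
    As a concrete example of the strong-turbulence limit mentioned above, the negative-exponential model is the single-parameter distribution below; this is the standard textbook form, quoted for orientation rather than taken from the paper.

      p(I) = \frac{1}{\langle I \rangle}\exp\!\left(-\frac{I}{\langle I \rangle}\right), \qquad I \ge 0,

    where ⟨I⟩ is the mean irradiance; in this limit the scintillation index equals unity.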

  11. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
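
    The core numerical step in multilateration is a nonlinear least-squares fit of station-to-vehicle ranges. The sketch below is a generic Gauss-Newton solver for that step, not the MICRODOT software; the station coordinates, ranges and initial guess are placeholders.

      import numpy as np

      def multilaterate(stations, ranges, x0, iterations=20):
          """Estimate a 3-D position from station coordinates (n x 3) and measured
          ranges (n,) by Gauss-Newton least squares. Generic sketch only."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iterations):
              predicted = np.linalg.norm(stations - x, axis=1)
              jacobian = (x - stations) / predicted[:, None]    # d(range_i)/dx
              step, *_ = np.linalg.lstsq(jacobian, ranges - predicted, rcond=None)
              x += step
          return x

      # Placeholder geometry: four stations and exact ranges to a known point.
      stations = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                           [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
      truth = np.array([3.0, 4.0, 5.0])
      ranges = np.linalg.norm(stations - truth, axis=1)
      print(multilaterate(stations, ranges, x0=[1.0, 1.0, 1.0]))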

  12. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since it requires the collection of independent, well-defined test points. Quantitative analysis of relative positional error, however, is feasible.

  13. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  14. Modeling of layered anisotropic composite material based on effective medium theory

    NASA Astrophysics Data System (ADS)

    Bao, Yang; Song, Jiming

    2018-04-01

    In this paper, we present an efficient method to simulate multilayered anisotropic composite material with effective medium theory. The effective permittivity, permeability and orientation angle for a layered anisotropic composite medium are extracted with this equivalent model. We also derive, in detail, analytical expressions for the effective parameters and orientation angle in the low-frequency (LF) limit. Numerical results compare the extracted effective parameters and orientation angle with the analytical results from the low-frequency limit. Good agreement is achieved, demonstrating the accuracy of our efficient model.
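
    For orientation, in the low-frequency limit a stack of isotropic, non-magnetic layers reduces to the familiar anisotropic mixing rules below (volume fractions f_i, layer permittivities ε_i); these are the standard results, while the paper's expressions additionally handle anisotropic layers and the orientation angle.

      \varepsilon_{\parallel} = \sum_i f_i\,\varepsilon_i,
      \qquad
      \varepsilon_{\perp} = \left(\sum_i \frac{f_i}{\varepsilon_i}\right)^{-1},

    i.e. the in-plane response averages the layer permittivities while the out-of-plane response averages their reciprocals, so the stack behaves as a uniaxial effective medium.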

  15. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques and of a computerized system for assessing wildlife habitat from land cover maps is considered. A literature review on accuracy assessment techniques and an explanation of the techniques developed under both projects are included, along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers published on the results of this project are appended.

  16. Matter power spectrum and the challenge of percent accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
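
    To make the quantity under discussion concrete, the sketch below estimates an isotropically binned P(k) from a periodic overdensity grid with an FFT. It is a minimal illustration and omits the mass-assignment, shot-noise and aliasing corrections that the code comparison above depends on.

      import numpy as np

      def power_spectrum(delta, box_size, n_bins=20):
          """Isotropically binned P(k) of an overdensity grid delta (N x N x N)
          in a periodic box of side box_size [h^-1 Mpc]. Minimal sketch only."""
          n = delta.shape[0]
          cell = box_size / n
          delta_k = np.fft.rfftn(delta) * cell**3              # approximate continuum FT
          pk3d = np.abs(delta_k) ** 2 / box_size**3            # P(k) = |delta_k|^2 / V
          kx = 2 * np.pi * np.fft.fftfreq(n, d=cell)
          kz = 2 * np.pi * np.fft.rfftfreq(n, d=cell)
          kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
          edges = np.linspace(2 * np.pi / box_size, kmag.max(), n_bins + 1)
          idx = np.digitize(kmag.ravel(), edges)
          pk = np.array([pk3d.ravel()[idx == b].mean() if np.any(idx == b) else np.nan
                         for b in range(1, n_bins + 1)])
          return 0.5 * (edges[1:] + edges[:-1]), pk

      # Placeholder input: Gaussian noise on a 64^3 grid in a 0.5 h^-1 Gpc box.
      k, pk = power_spectrum(np.random.standard_normal((64, 64, 64)), box_size=500.0)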

  17. Dual processing theory and experts' reasoning: exploring thinking on national multiple-choice questions.

    PubMed

    Durning, Steven J; Dong, Ting; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Schuwirth, Lambert

    2015-08-01

    An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning), or a combination of the two. Studies have not, however, explicitly explored faculty's thought processes while tackling clinical problems through the lens of dual process theory to inform this debate. Further, these thought processes have not been studied in relation to the difficulty of the task or other potential mediating influences such as personal factors and fatigue (which could itself be influenced by factors such as sleep deprivation). We therefore sought to determine which reasoning process(es) were used when answering clinically oriented multiple-choice questions (MCQs) and whether these processes differed based on the dual process theory characteristics: accuracy, reading time and answering time, as well as psychometrically determined item difficulty and sleep deprivation. We performed a think-aloud procedure to explore faculty's thought processes while taking these MCQs, coding think-aloud data based on reasoning process (analytic, non-analytic, guessing or a combination of processes) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also included questions regarding the amount of work in the recent past. We then conducted statistical analyses to examine the associations between these measures, such as correlations between the frequencies of reasoning processes and item accuracy and difficulty. We also tallied the total frequencies of the different reasoning processes for correctly and incorrectly answered items. Regardless of whether the questions were classified as 'hard' or 'easy', non-analytical reasoning led to the correct answer more often than to an incorrect answer. Significant correlations were found between the self-reported recent number of hours worked and think-aloud word count

  18. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method is presented for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation enabling displacement of the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
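
    The key step, mapping corresponding characteristic points from the optical data onto their contact-measured references and applying the resulting transformation to the whole cloud, is in its simplest rigid form the classic Kabsch/Procrustes fit sketched below. The paper's virtual-marker construction is not reproduced here; the function only illustrates the alignment step.

      import numpy as np

      def rigid_fit(source, target):
          """Least-squares rotation R and translation t with target ~ R @ source + t,
          for matched point sets (n x 3). Classic Kabsch algorithm; generic sketch."""
          src_c = source - source.mean(axis=0)
          tgt_c = target - target.mean(axis=0)
          u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
          sign = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
          rotation = vt.T @ np.diag([1.0, 1.0, sign]) @ u.T
          translation = target.mean(axis=0) - rotation @ source.mean(axis=0)
          return rotation, translation

      # Usage: fit on the characteristic-point pairs, then apply to the full cloud:
      # aligned_cloud = (rotation @ optical_cloud.T).T + translation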

  19. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
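
    The normalized jerk referred to above is a dimensionless smoothness index; one common convention (the time integral of squared jerk scaled by duration and amplitude) is sketched below, though the study's exact normalization may differ.

      import numpy as np

      def normalized_jerk(x, y, dt):
          """Dimensionless jerk of a sampled 2-D trajectory: integral of squared jerk
          scaled by duration^5 / path_length^2 (one common convention)."""
          vx, vy = np.gradient(x, dt), np.gradient(y, dt)
          ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
          jx, jy = np.gradient(ax, dt), np.gradient(ay, dt)
          duration = dt * (len(x) - 1)
          path_length = np.sum(np.hypot(np.diff(x), np.diff(y)))
          jerk_integral = np.sum(jx**2 + jy**2) * dt
          return np.sqrt(0.5 * jerk_integral * duration**5 / path_length**2)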

  20. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across the 2819 reported results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angulation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In order to draw dependable conclusions and to further evaluate the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary.
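
    The three accuracy measures summarized above (entry-point, apex and angular deviation) are simple vector quantities once the planned and placed implant axes are expressed in the same coordinate frame. The sketch below shows the generic calculation with placeholder coordinates, not any particular guided-surgery software.

      import numpy as np

      def implant_deviations(planned_entry, planned_apex, placed_entry, placed_apex):
          """Entry and apex deviation (mm) and angular deviation (degrees) between
          planned and placed implant axes given as 3-D points in a common frame."""
          entry_dev = np.linalg.norm(placed_entry - planned_entry)
          apex_dev = np.linalg.norm(placed_apex - planned_apex)
          v_plan = planned_apex - planned_entry
          v_placed = placed_apex - placed_entry
          cos_angle = np.dot(v_plan, v_placed) / (np.linalg.norm(v_plan) * np.linalg.norm(v_placed))
          angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
          return entry_dev, apex_dev, angle

      # Placeholder example: 0.5 mm lateral shift at entry and a slight tilt.
      print(implant_deviations(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -10.0]),
                               np.array([0.5, 0.0, 0.0]), np.array([0.8, 0.0, -10.0])))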