Sample records for simplicity compromise accuracy

  1. Refinements in a viscoplastic model

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1989-01-01

    A thermodynamically admissible theory of viscoplasticity with two internal variables (a back stress and a drag strength) is presented. Six material functions characterize a specific viscoplastic model. In the pursuit of a compromise between accuracy and simplicity, a model is developed that is a hybrid of two existing viscoplastic models. A limited number of applications of the model to Al, Cu, and Ni are presented. A novel implicit integration method is also discussed and applied to obtain solutions with this viscoplastic model.

  2. Acoustic transient classification with a template correlation processor.

    PubMed

    Edwards, R T

    1999-10-01

    I present an architecture for acoustic pattern classification using trinary-trinary template correlation. In spite of its computational simplicity, the algorithm and architecture represent a method that greatly reduces the bandwidth of the input, the storage requirements of the classifier memory, and the power consumption of the system without compromising classification accuracy. The linear system should be amenable to training using recently developed methods such as Independent Component Analysis (ICA), and we predict that its behavior will be qualitatively similar to that of structures in the auditory cortex.
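
    The abstract does not spell out the correlation kernel; on one plausible reading of "trinary-trinary", both the input window and the stored template are quantized to {-1, 0, +1} before correlating, which turns multiplications into sign selections and accumulation into counting. The Python sketch below illustrates only that reading; the thresholds and function names are assumptions, not Edwards' design.

    ```python
    import numpy as np

    def trinarize(x, theta):
        """Quantize a signal to {-1, 0, +1} using a dead-band threshold theta."""
        return np.where(x > theta, 1, np.where(x < -theta, -1, 0)).astype(np.int8)

    def trinary_correlation(signal, template, theta_s=0.3, theta_t=0.3):
        """Correlate a trinarized input window against a trinarized template.

        With both operands in {-1, 0, +1}, the products reduce to sign
        selections, which is what makes such a scheme cheap in hardware.
        """
        return int(np.dot(trinarize(signal, theta_s), trinarize(template, theta_t)))

    # Toy check: a noisy copy of the template still correlates strongly.
    rng = np.random.default_rng(0)
    template = np.sin(np.linspace(0, 4 * np.pi, 64))
    print(trinary_correlation(template + 0.2 * rng.standard_normal(64), template))
    ```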

  3. Validation of Non-Invasive Tracer Kinetic Analysis of 18F-Florbetaben PET Using a Dual Time-Window Acquisition Protocol.

    PubMed

    Bullich, Santiago; Barthel, Henryk; Koglin, Norman; Becker, Georg A; De Santi, Susan; Jovalekic, Aleksandar; Stephens, Andrew W; Sabri, Osama

    2017-11-24

    Accurate amyloid PET quantification is necessary for monitoring amyloid-beta accumulation and response to therapy. Currently, most of the studies are analyzed using the static standardized uptake value ratio (SUVR) approach because of its simplicity. However, this approach may be influenced by changes in cerebral blood flow (CBF) or radiotracer clearance. Full tracer kinetic models require arterial blood sampling and dynamic image acquisition. The objectives of this work were: (1) to validate a non-invasive kinetic modeling approach for 18F-florbetaben PET using an acquisition protocol with the best compromise between quantification accuracy and simplicity and (2) to assess the impact of CBF changes and radiotracer clearance on SUVRs and non-invasive kinetic modeling data in 18F-florbetaben PET. Methods: Data from twenty subjects (10 patients with probable Alzheimer's dementia / 10 healthy volunteers) were used to compare the binding potential (BP_ND) obtained from the full kinetic analysis to the SUVR and to non-invasive tracer kinetic methods (simplified reference tissue model (SRTM) and multilinear reference tissue model 2 (MRTM2)). Different approaches using shortened or interrupted acquisitions were compared to the results of the full acquisition (0-140 min). Simulations were carried out to assess the effect of CBF and radiotracer clearance changes on SUVRs and non-invasive kinetic modeling outputs. Results: A 0-30 and 120-140 min dual time-window acquisition protocol using appropriate interpolation of the missing time points provided the best compromise between patient comfort and quantification accuracy. Excellent agreement was found between BP_ND obtained using the full and dual time-window (2TW) acquisition protocols (BP_ND,2TW = 0.01 + 1.00·BP_ND,FULL, R^2 = 0.97 (MRTM2); BP_ND,2TW = 0.05 + 0.92·BP_ND,FULL, R^2 = 0.93 (SRTM)). Simulations showed a limited impact of CBF and radiotracer clearance changes on MRTM parameters and SUVRs. Conclusion: This study demonstrates accurate non-invasive kinetic modeling of 18F-florbetaben PET data using a dual time-window acquisition protocol, thus providing a good compromise between quantification accuracy, scan duration and patient burden. The influence of CBF and radiotracer clearance changes on amyloid-beta load estimates was small. For most clinical research applications, the SUVR approach is appropriate. However, for longitudinal studies in which maximum quantification accuracy is desired, this non-invasive dual time-window acquisition protocol and kinetic analysis is recommended. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
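
    For reference, the static SUVR that the abstract contrasts with kinetic modeling is simply a ratio of activities in a late uptake window. The Python sketch below is a minimal illustration; the variable names, frame grid and 90-110 min window are assumptions for demonstration, not the paper's protocol.

    ```python
    import numpy as np

    def suvr(target_tac, reference_tac, frame_mid_times, window=(90.0, 110.0)):
        """Static SUVR: mean target activity divided by mean reference-region
        activity over a late time window (minutes post-injection)."""
        t = np.asarray(frame_mid_times)
        mask = (t >= window[0]) & (t <= window[1])
        target = np.asarray(target_tac)[mask].mean()
        reference = np.asarray(reference_tac)[mask].mean()
        return target / reference
    ```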

  4. Modeling and analysis of cascade solar cells

    NASA Technical Reports Server (NTRS)

    Ho, F. D.

    1986-01-01

    A brief review is given of the present status of the development of cascade solar cells. It is known that photovoltaic efficiencies can be improved through this development. The designs and calculations of the multijunction cells, however, are quite complicated. The main goal is to find a method which is a compromise between accuracy and simplicity for modeling a cascade solar cell. Three approaches are presently under way, among them (1) the equivalent circuit approach, (2) the numerical approach, and (3) the analytical approach. Here, the first and the second approaches are discussed. The application of the equivalent circuit approach, using SPICE (Simulation Program with Integrated Circuit Emphasis), to cascade cells and cascade-cell arrays is highlighted. The methods of extracting parameters for modeling are discussed.

  5. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy.

    PubMed

    Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad

    2017-01-01

    The quarter vehicle model is the simplest lumped-mass representation of a vehicle. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models, owing to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as their inclusion changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details.
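
    For context, the base model the authors enhance is the standard two-degree-of-freedom quarter-car: a sprung mass on a spring-damper suspension above an unsprung mass on a tire spring. The Python sketch below integrates those equations; the parameter values and sinusoidal road input are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    ms, mu = 250.0, 40.0      # sprung / unsprung mass, kg (illustrative)
    ks, cs = 16000.0, 1000.0  # suspension stiffness N/m, damping N.s/m
    kt = 160000.0             # tire stiffness, N/m

    def zr(t):
        """Assumed sinusoidal road profile; the paper also varies input detail."""
        return 0.01 * np.sin(2 * np.pi * 2.0 * t)

    def rhs(t, y):
        zs, vs, zu, vu = y  # body and wheel displacements and velocities
        f_susp = ks * (zu - zs) + cs * (vu - vs)
        f_tire = kt * (zr(t) - zu)
        return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
    print("peak sprung-mass displacement (m):", np.abs(sol.y[0]).max())
    ```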

  6. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy

    PubMed Central

    2017-01-01

    The quarter vehicle model is the simplest lumped-mass representation of a vehicle. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models, owing to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as their inclusion changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details. PMID:28617819

  7. Interface concerns of ejector integration in V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Lowry, R. B.

    1979-01-01

    A number of areas which have in the past contributed to weight, complexity, and thrust losses in the ejector-powered V/STOL vehicle were identified. Most of these interfaces taken singly do not represent a severe compromise to the vehicle; however, the bottom line is that the sum of compromises and the subsequent effects on performance, flight operations and maintenance have rendered the ejector V/STOL aircraft unattractive. In addition to some of the unique ejector/aircraft integration problems, the vehicle, by virtue of having a V/STOL capability, is compromised in other areas. For the vehicle to be successful and acceptable, the advantages must outweigh the disadvantages, and simplicity with minimum penalties must be the rule. It is concluded that more emphasis must be placed on the ejector/aircraft interface for the concept to be successful.

  8. Team science for science communication.

    PubMed

    Wong-Parodi, Gabrielle; Strauss, Benjamin H

    2014-09-16

    Natural scientists from Climate Central and social scientists from Carnegie Mellon University collaborated to develop science communications aimed at presenting personalized coastal flood risk information to the public. We encountered four main challenges: agreeing on goals; balancing complexity and simplicity; relying on data, not intuition; and negotiating external pressures. Each challenge demanded its own approach. We navigated agreement on goals through intensive internal communication early on in the project. We balanced complexity and simplicity through evaluation of communication materials for user understanding and scientific content. Early user test results that overturned some of our intuitions strengthened our commitment to testing communication elements whenever possible. Finally, we did our best to negotiate external pressures through regular internal communication and willingness to compromise.

  9. A multi-user real time inventorying system for radioactive materials: a networking approach.

    PubMed

    Mehta, S; Bandyopadhyay, D; Hoory, S

    1998-01-01

    A computerized system for radioisotope management and real time inventory coordinated across a large organization is reported. It handles hundreds of individual users and their separate inventory records. Use of highly efficient computer network and database technologies makes it possible to accept, maintain, and furnish all records related to receipt, usage, and disposal of the radioactive materials for the users separately and collectively. The system's central processor is an HP-9000/800 G60 RISC server and users from across the organization use their personal computers to login to this server using the TCP/IP networking protocol, which makes distributed use of the system possible. Radioisotope decay is automatically calculated by the program, so that it can make the up-to-date radioisotope inventory data of an entire institution available immediately. The system is specifically designed to allow use by large numbers of users (about 300) and accommodates high volumes of data input and retrieval without compromising simplicity and accuracy. Overall, it is an example of a true multi-user, on-line, relational database information system that makes the functioning of a radiation safety department efficient.

  10. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.
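
    The AGE formulation itself is not reproduced in this abstract, so as a generic stand-in (an assumption, not Bisset's scheme) the Python sketch below shows what a localized, fully explicit finite-difference update looks like: each point of a 1-D advection-diffusion field is advanced from its immediate neighbours, which is cheap per step but requires a fairly small timestep.

    ```python
    import numpy as np

    def explicit_step(u, c, nu, dx, dt):
        """One explicit update of du/dt + c du/dx = nu d2u/dx2 (periodic grid).

        First-order upwind advection plus centred diffusion; stable only for
        small dt (roughly c*dt/dx <= 1 and nu*dt/dx**2 <= 0.5).
        """
        up = np.roll(u, 1)    # u[i-1]
        un = np.roll(u, -1)   # u[i+1]
        adv = -c * (u - up) / dx if c >= 0 else -c * (un - u) / dx
        dif = nu * (un - 2 * u + up) / dx**2
        return u + dt * (adv + dif)
    ```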

  11. Increase in the Accuracy of Calculating Length of Horizontal Cable SCS in Civil Engineering

    NASA Astrophysics Data System (ADS)

    Semenov, A.

    2017-11-01

    A modification of the method for calculating the horizontal cable consumption of SCS installed at civil engineering facilities is proposed. The proposed procedure preserves the simplicity of the prototype and provides a 5 percent increase in accuracy. The achieved accuracy values are justified, and their compliance with the practice of real projects is demonstrated. The method is brought to the level of an engineering algorithm and formalized in the form of the 12/70 rule.

  12. CYBER SUPPLY CHAIN SECURITY: CAN THE BACKDOOR BE CLOSED WITH TRUSTED DESIGN, MANUFACTURING AND SUPPLY

    DTIC Science & Technology

    2016-08-01

    components from making it into DoD systems. The benefits of trusted design and manufacturing would likely cost more, but would confidently minimize DoD...compromise products too high for an attacker. If the costs and effort needed are greater than the benefit to conduct an attack, malicious actors are...simplicity may be a better approach. While there are potential benefits to built-in hardware and software security, there may be just as many

  13. Image-guided positioning and tracking.

    PubMed

    Ruan, Dan; Kupelian, Patrick; Low, Daniel A

    2011-01-01

    Radiation therapy aims at maximizing tumor control while minimizing normal tissue complication. The introduction of stereotactic treatment explores the volume effect and achieves dose escalation to tumor target with small margins. The use of ablative irradiation dose and sharp dose gradients requires accurate tumor definition and alignment between patient and treatment geometry. Patient geometry variation during treatment may significantly compromise the conformality of delivered dose and must be managed properly. Setup error and interfraction/intrafraction motion are incorporated in the target definition process by expanding the clinical target volume to planning target volume, whereas the alignment between patient and treatment geometry is obtained with an adaptive control process, by taking immediate actions in response to closely monitored patient geometry. This article focuses on the monitoring and adaptive response aspect of the problem. The term "image" in "image guidance" will be used in a most general sense, to be inclusive of some important point-based monitoring systems that can be considered as degenerate cases of imaging. Image-guided motion adaptive control, as a comprehensive system, involves a hierarchy of decisions, each of which balances simplicity versus flexibility and accuracy versus robustness. Patient specifics and machine specifics at the treatment facility also need to be incorporated into the decision-making process. Identifying operation bottlenecks from a system perspective and making informed compromises are crucial in the proper selection of image-guidance modality, the motion management mechanism, and the respective operation modes. Not intended as an exhaustive exposition, this article focuses on discussing the major issues and development principles for image-guided motion management systems. We hope this information and these methodologies will help conscientious practitioners to adopt image-guided motion management systems accounting for patient and institute specifics and to embrace advances in knowledge and new technologies subsequent to the publication of this article.

  14. AIC and the challenge of complexity: A case study from ecology.

    PubMed

    Moll, Remington J; Steel, Daniel; Montgomery, Robert A

    2016-12-01

    Philosophers and scientists alike have suggested that Akaike's Information Criterion (AIC), and other similar model selection methods, show that predictive accuracy justifies a preference for simplicity in model selection. This epistemic justification of simplicity is limited by an assumption of AIC which requires that the same probability distribution must generate both the data used to fit the model and the data about which predictions are made. This limitation has been noted previously but appears often to go unnoticed by philosophers and scientists, and it has not been analyzed in relation to complexity. If predictions are about future observations, we argue that this assumption is unlikely to hold for models of complex phenomena. That in turn creates a practical limitation for simplicity's AIC-based justification, because scientists modeling such phenomena are often interested in predicting the future. We support our argument with an ecological case study concerning the reintroduction of wolves into Yellowstone National Park, U.S.A. We suggest that AIC might still lend epistemic support for simplicity by leading to better explanations of complex phenomena. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Nondynamic Tracking Using The Global Positioning System

    NASA Technical Reports Server (NTRS)

    Yunck, T. P.; Wu, Sien-Chong

    1988-01-01

    Report describes technique for using Global Positioning System (GPS) to determine position of low Earth orbiter without need for dynamic models. Differential observing strategy requires GPS receiver on user vehicle and network of six ground receivers. Computationally efficient technique delivers decimeter accuracy on orbits down to lowest altitudes. New technique is a nondynamic long-arc strategy having the potential for the accuracy of the best dynamic techniques while retaining much of the computational simplicity of geometric techniques.

  16. A procedure for classifying textural facies in gravel‐bed rivers

    USGS Publications Warehouse

    Buffington, John M.; Montgomery, David R.

    1999-01-01

    Textural patches (i.e., grain‐size facies) are commonly observed in gravel‐bed channels and are of significance for both physical and biological processes at subreach scales. We present a general framework for classifying textural patches that allows modification for particular study goals, while maintaining a basic degree of standardization. Textures are classified using a two‐tier system of ternary diagrams that identifies the relative abundance of major size classes and subcategories of the dominant size. An iterative procedure of visual identification and quantitative grain‐size measurement is used. A field test of our classification indicates that it affords reasonable statistical discrimination of median grain size and variance of bed‐surface textures. We also explore the compromise between classification simplicity and accuracy. We find that statistically meaningful textural discrimination requires use of both tiers of our classification. Furthermore, we find that simplified variants of the two‐tier scheme are less accurate but may be more practical for field studies which do not require a high level of textural discrimination or detailed description of grain‐size distributions. Facies maps provide a natural template for stratifying other physical and biological measurements and produce a retrievable and versatile database that can be used as a component of channel monitoring efforts.

  17. Rephrasing Faraday's Law

    ERIC Educational Resources Information Center

    Hill, S. Eric

    2010-01-01

    As physics educators, we must often find the balance between simplicity and accuracy. Particularly in introductory courses, it can be a struggle to give students the level of understanding for which they're ready without misrepresenting reality. Of course, it's in these introductory courses that our students begin to construct the conceptual…

  18. In situ monitoring using Lab on Chip devices, with particular reference to dissolved silica.

    NASA Astrophysics Data System (ADS)

    Turner, G. S. C.; Loucaides, S.; Slavik, G. J.; Owsianka, D. R.; Beaton, A.; Nightingale, A.; Mowlem, M. C.

    2016-02-01

    In situ sensors are attractive alternatives to discrete sampling of natural waters, offering the potential for sustained long term monitoring and eliminating the need for sample handling. This can reduce sample contamination and degradation. In addition, sensors can be clustered into multi-parameter observatories and networked to provide both spatial and time series coverage. High resolution, low cost, and long term monitoring are the biggest advantages of these technologies to oceanographers. Microfluidic technology miniaturises bench-top assay systems into portable devices, known as a 'lab on a chip' (LOC). The principal advantages of this technology are low power consumption, simplicity, speed, and stability without compromising on quality (accuracy, precision, selectivity, sensitivity). We have successfully demonstrated in situ sensors based on this technology for the measurement of pH, nitrate and nitrite. Dissolved silica (dSi) is an important macro-nutrient supporting a major fraction of oceanic primary production carried out by diatoms. The biogeochemical Si cycle is undergoing significant modifications due to human activities, which affects availability of dSi, and consequently primary production. Monitoring dSi concentrations is therefore critical in increasing our understanding of the biogeochemical Si cycle to predict and manage anthropogenic perturbations. The standard bench top air segmented flow technique utilising the reduction of silicomolybdic acid with spectrophotometric detection has been miniaturised into a LOC system; the target limit of detection is 1 nM, with ± 5% accuracy and 3% precision. Results from the assay optimisation are presented along with reagent shelf life to demonstrate the robustness of the chemistry. Laboratory trials of the sensor using ideal solutions and environmental samples in environmentally relevant conditions (temperature, pressure) are discussed, along with an overview of our current LOC analytical capabilities.

  19. A design for integration.

    PubMed

    Fenna, D

    1977-09-01

    For nearly two decades, the development of computerized information systems has struggled for acceptable compromises between the unattainable "total system" and the unacceptable separate applications. Integration of related applications is essential if the computer is to be exploited fully, yet relative simplicity is necessary for systems to be implemented in a reasonable time-scale. This paper discusses a system being progressively developed from minimal beginnings but which, from the outset, had a highly flexible and fully integrated system basis. The system is for batch processing, but can accommodate on-line data input; it is similar in its approach to many transaction-processing real-time systems.

  20. Use of causative variants and SNP weighting in a single-step GBLUP context

    USDA-ARS?s Scientific Manuscript database

    Much effort has recently been put into identifying causative quantitative trait nucleotides (QTN) in animal breeding, aiming at genomic prediction. Among the genomic methods available, single-step GBLUP (ssGBLUP) became the method of choice because of its simplicity and potentially higher accuracy. When QTN are ...

  1. Noise, cost and speed-accuracy trade-offs: decision-making in a decentralized system

    PubMed Central

    Marshall, James A.R.; Dornhaus, Anna; Franks, Nigel R.; Kovacs, Tim

    2005-01-01

    Many natural and artificial decision-making systems face decision problems where there is an inherent compromise between two or more objectives. One such common compromise is between the speed and accuracy of a decision. The ability to exploit the characteristics of a decision problem in order to vary between the extremes of making maximally rapid, or maximally accurate decisions, is a useful property of such systems. Colonies of the ant Temnothorax albipennis (formerly Leptothorax albipennis) are a paradigmatic decentralized decision-making system, and have been shown flexibly to compromise accuracy for speed when making decisions during house-hunting. During emigration, a colony must typically evaluate and choose between several possible alternative new nest sites of differing quality. In this paper, we examine this speed-accuracy trade-off through modelling, and conclude that noise and time-cost of assessing alternative choices are likely to be significant for T. albipennis. Noise and cost of such assessments are likely to mean that T. albipennis' decision-making mechanism is Pareto-optimal in one crucial regard: increasing the willingness of individuals to change their decisions cannot improve collective accuracy overall without impairing speed. We propose that a decentralized control algorithm based on this emigration behaviour may be derived for applications in engineering domains and specify the characteristics of the problems to which it should be suited, based on our new results. PMID:16849234

  2. Contribution of Regional White Matter Integrity to Visuospatial Construction Accuracy, Organizational Strategy, and Memory for a Complex Figure in Abstinent Alcoholics.

    PubMed

    Rosenbloom, Margaret J; Sassoon, Stephanie A; Pfefferbaum, Adolf; Sullivan, Edith V

    2009-12-01

    Visuospatial construction ability as used in drawing complex figures is commonly impaired in chronic alcoholics, but memory for such information can be enhanced by use of a holistic drawing strategy during encoding. We administered the Rey-Osterrieth Complex Figure Test (ROCFT) to 41 alcoholic and 38 control men and women and assessed the contribution of diffusion tensor imaging (DTI) measures of integrity of selected white matter tracts to ROCFT copy accuracy, copy strategy, and recall accuracy. Although alcoholics copied the figure less accurately than controls, a more holistic strategy at copy was associated with better recall in both groups. Greater radial diffusivity, reflecting compromised myelin integrity, in occipital forceps and external capsule was associated with poorer copy accuracy in both groups. Lower FA, reflecting compromised fiber microstructure in the inferior cingulate bundle, which links frontal and medial temporal episodic memory systems, was associated with piecemeal copy strategy and poorer immediate recall in the alcoholics. The correlations were generally modest and should be considered exploratory. To the extent that the inferior cingulate was relatively spared in alcoholics, it may have provided an alternative pathway to the compromised frontal system for successful copy strategy and, by extension, aided recall.

  3. A portable self-sensing rheometer for investigation and therapy of swallowing disorders.

    PubMed

    O'Leary, Mark T; Hanson, Ben

    2010-01-01

    Dysphagia is a medical condition in which the safety or efficiency of eating and drinking is compromised. Thin, watery fluids flow too quickly through the oral anatomy during an abnormal swallow, pre-empting airway protective mechanisms, and potentially resulting in fluid entry into the lung. Dysphagia therapy consists of reducing flow speed during swallowing by increasing fluid viscosity using thickeners. Bolus viscosity must be specified and presented to the patient within a well-defined range for effective therapy. Thickeners produce non-Newtonian fluids, rendering current subjective methods for fluid assessment unreliable. Widespread quantification of fluid viscosity is presently impractical as rheometers are costly and complicated to use. Alternative techniques also have disadvantages such as operation at shear rates inappropriate to fluid use. A simple and inexpensive rheometer has been constructed to remedy this situation using a self-sensing electromagnetic actuator. This avoids the need for separate force and displacement sensors, with benefits for simplicity and robustness. The actuator and fluid interface were designed for viscosities consistent with those used for dysphagia therapy. The self-sensing rheometer was found to be able to resolve the different dynamic viscosities obtained from three commonly used therapeutic fluid consistency levels in close agreement with results from a reference laboratory rheometer. Widespread use of the rheometer could remove the subjectivity of fluid assessment, increasing accuracy of fluid specification and therapy across all consistencies and fluid types.

  4. Good and Bad Public Prose.

    ERIC Educational Resources Information Center

    Cockburn, Stewart

    1969-01-01

    The basic requirements of all good prose are clarity, accuracy, brevity, and simplicity. Especially in public prose--in which the meaning is the crux of the article or speech--concise, vigorous English demands a minimum of adjectives, a maximum use of the active voice, nouns carefully chosen, a logical argument with no labored or obscure points,…

  5. How Many Subjects Does It Take to Do a Regression Analysis?

    ERIC Educational Resources Information Center

    Green, Samuel B.

    1991-01-01

    An evaluation of the rules-of-thumb used to determine the minimum number of subjects required to conduct multiple regression analyses suggests that researchers who use a rule of thumb rather than power analyses trade simplicity of use for accuracy and specificity of response. Insufficient power is likely to result. (SLD)

  6. Recording wildlife locations with the Universal Transverse Mercator (UTM) grid system

    Treesearch

    T. G. Grubb; W. L. Eakle

    1988-01-01

    The Universal Transverse Mercator (UTM) international, planar, grid system is described and shown to offer greater simplicity, efficiency and accuracy for plotting wildlife locations than the more familiar Latitude-Longitude (Latilong) and Section-Township-Range (Cadastral) systems, and the State planar system. Use of the UTM system is explained with examples.

  7. A new equation of state for better liquid density prediction of natural gas systems

    NASA Astrophysics Data System (ADS)

    Nwankwo, Princess C.

    Equations of state formulations, modifications and applications have remained active research areas since the success of van der Waals' equation in 1873. The need for better reservoir fluid modeling and characterization is of great importance to petroleum engineers who deal with thermodynamic related properties of petroleum fluids at every stage of the petroleum "life span" from its drilling, to production through the wellbore, to transportation, metering and storage. Equations of state methods are far less expensive (in terms of material cost and time) than laboratory or experimental forays, and the results are interestingly not too far removed from the limits of acceptable accuracy. In most cases, the degree of accuracy obtained using various EOSs, though not appreciable, has been acceptable when considering the gain in time. The possibility of obtaining an equation of state which, though simple in form and in use, could have the potential of further narrowing the existing bias between experimentally determined and popular EOS-estimated results spurred the interest that resulted in this study. This research study had as its chief objective to develop a new equation of state that would more efficiently capture the thermodynamic properties of gas condensate fluids, especially the liquid phase density, which is the major weakness of other established and popular cubic equations of state. The set objective was satisfied by a new semi-analytical cubic three-parameter equation of state, derived by modifying the attraction term contribution to pressure of the van der Waals EOS, without compromising either structural simplicity or accuracy in estimating other vapor-liquid equilibria properties. Applied to single- and multi-component light hydrocarbon fluids, the new EOS recorded far lower error values than the popular two-parameter Peng-Robinson (PR) and three-parameter Patel-Teja (PT) equations of state. Furthermore, this research was able to extend the application of the generalized cubic equation of Coats (1985) to three-parameter cubic equations of state, a feat not yet recorded by any author in the literature.
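
    The modified attraction term itself is not given in the abstract, but the classical van der Waals baseline it starts from can be solved for phase densities as a cubic in molar volume. The Python sketch below illustrates that baseline only; the methane constants are an example, not data from the study.

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def vdw_volumes(T, P, Tc, Pc):
        """Real molar-volume roots of the van der Waals EOS at (T, P).

        P = R*T/(V - b) - a/V**2 rearranges to the cubic
        P*V**3 - (P*b + R*T)*V**2 + a*V - a*b = 0.
        Smallest physical root ~ liquid, largest ~ vapor.
        """
        a = 27 * R**2 * Tc**2 / (64 * Pc)
        b = R * Tc / (8 * Pc)
        roots = np.roots([P, -(P * b + R * T), a, -a * b])
        real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
        return real[real > b]  # discard roots below the co-volume

    # Methane (Tc = 190.6 K, Pc = 4.599e6 Pa) at 150 K and 1 MPa:
    print(vdw_volumes(150.0, 1.0e6, 190.6, 4.599e6))  # m^3/mol
    ```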

  8. Theta Neurofeedback Effects on Motor Memory Consolidation and Performance Accuracy: An Apparent Paradox?

    PubMed

    Reiner, Miriam; Lev, Dror D; Rosen, Amit

    2018-05-15

    Previous studies have shown that theta neurofeedback enhances motor memory consolidation on an easy-to-learn finger-tapping task. However, the simplicity of the finger-tapping task precludes evaluating the putative effects of elevated theta on performance accuracy. Mastering a motor sequence is classically assumed to entail faster performance with fewer errors. The speed-accuracy tradeoff (SAT) principle states that as action speed increases, motor performance accuracy decreases. The current study investigated whether theta neurofeedback could improve both performance speed and performance accuracy, or would only enhance performance speed at the cost of reduced accuracy. A more complex task was used to study the effects of parietal elevated theta on 45 healthy volunteers. The findings confirmed previous results on the effects of theta neurofeedback on memory consolidation. In contrast to the two control groups, in the theta-neurofeedback group the speed-accuracy tradeoff was reversed. The speed-accuracy tradeoff patterns only stabilized after a night's sleep, implying enhancement in terms of both speed and accuracy. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Accuracy increase of self-compensator

    NASA Astrophysics Data System (ADS)

    Zhambalova, S. Ts; Vinogradova, A. A.

    2018-03-01

    In this paper, the authors consider a self-compensation system and a method for increasing its accuracy without violating the conditions imposed by the information theory of measuring devices. The result can be achieved using pulse control of the tracking system in the dead zone (the zone of the proportional section of the amplifier's characteristic). Pulse control makes it possible to increase the control power even though the input signal of the amplifier is infinitesimal. To do this, the authors use a conversion scheme for the input quantity. It is also possible to reduce the dead band, but then the system becomes unstable. Correcting circuits increase the amount of information received from the instrument, but they complicate the system and, by dramatically reducing the feedback coefficient, reduce the speed. In this way, without compromising the measurement conditions, the authors increase the accuracy of the self-compensation system. The implementation technique allows the power of the input signal to be increased by many orders of magnitude.

  10. Quantifying the Effect of Polymer Blending through Molecular Modelling of Cyanurate Polymers

    PubMed Central

    Crawford, Alasdair O.; Hamerton, Ian; Cavalli, Gabriel; Howlin, Brendan J.

    2012-01-01

    Modification of polymer properties by blending is a common practice in the polymer industry. We report here a study of blends of cyanurate polymers by molecular modelling that shows that the final experimentally determined properties can be predicted from first principles modelling to a good degree of accuracy. There is always a compromise between simulation length, accuracy and speed of prediction. A comparison of simulation times shows that 125 ps of molecular dynamics simulation at each temperature provides the optimum compromise for models of this size with current technology. This study opens up the possibility of computer aided design of polymer blends with desired physical and mechanical properties. PMID:22970230

  11. Paper-based microfluidics with an erodible polymeric bridge giving controlled release and timed flow shutoff.

    PubMed

    Jahanshahi-Anbuhi, Sana; Henry, Aleah; Leung, Vincent; Sicard, Clémence; Pennings, Kevin; Pelton, Robert; Brennan, John D; Filipe, Carlos D M

    2014-01-07

    Water soluble pullulan films were formatted into paper-based microfluidic devices, serving as a controlled time shutoff valve. The utility of the valve was demonstrated by a one-step, fully automatic implementation of a complex pesticide assay requiring timed, sequential exposure of an immobilized enzyme layer to separate liquid streams. Pullulan film dissolution and the capillary wicking of aqueous solutions through the device were measured and modeled providing valve design criteria. The films dissolve mainly by surface erosion, meaning the film thickness mainly controls the shutoff time. This method can also provide time-dependent sequential release of reagents without compromising the simplicity and low cost of paper-based devices.

  12. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
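
    Of the averaging variants named above, Granger-Ramanathan variant C has a closed form: ordinary least squares of the observed flows on the member simulations plus an intercept, with unconstrained weights (variants A and B drop the intercept and, for B, constrain the weights to sum to one). The Python sketch below is a minimal reading of GRC; the array layout and function names are assumptions.

    ```python
    import numpy as np

    def grc_weights(sims, obs):
        """GRC: OLS of observed flows on member simulations plus an intercept.

        sims: (n_times, n_members) simulated flows; obs: (n_times,) observations.
        Returns (intercept, weights).
        """
        X = np.column_stack([np.ones(len(obs)), sims])
        beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
        return beta[0], beta[1:]

    def grc_combine(sims, intercept, weights):
        """Weighted-average hydrograph from calibrated GRC coefficients."""
        return intercept + sims @ weights
    ```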

  13. 12 Texts That Facilitate Authentic Reading Strategies for Novice, Experimenting, and Proficient Readers

    ERIC Educational Resources Information Center

    Hill, K. Dara

    2017-01-01

    The current climate of reading instruction calls for fluency strategies that stress automaticity, accuracy, and prosody, within the scope of prescribed reading programs that compromise teacher autonomy, with texts that are often irrelevant to the students' experiences. Consequently, accuracy and speed are developed, but deep comprehension is…

  14. An ingestible temperature-transmitter

    NASA Technical Reports Server (NTRS)

    Pope, J. M.; Fryer, T. B.; Sandler, H.

    1972-01-01

    Pill-sized transmitter measures deep body temperature in studies of circadian rhythm and indicates general health. Ingestible device is a compromise between accuracy, circuit complexity, size and transmission range.

  15. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS).

    PubMed

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; therefore, this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also has high sensitivity and accuracy.

  16. Seeing the elephant: Parsimony, functionalism, and the emergent design of contempt and other sentiments.

    PubMed

    Gervais, Matthew M; Fessler, Daniel M T

    2017-01-01

    The target article argues that contempt is a sentiment, and that sentiments are the deep structure of social affect. The 26 commentaries meet these claims with a range of exciting extensions and applications, as well as critiques. Most significantly, we reply that construction and emergence are necessary for, not incompatible with, evolved design, while parsimony requires explanatory adequacy and predictive accuracy, not mere simplicity.

  17. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

    The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
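
    For background, the classical Sievert approach the authors modify treats the source as a filtered line: each active element contributes inverse-square falloff, attenuated along its oblique path through the capsule wall. The Python sketch below evaluates that classical integral numerically; the geometry and parameter values are illustrative assumptions, not the proposed primary/scatter model.

    ```python
    import numpy as np

    def sievert_dose_rate(x, y, L=0.35, t=0.025, mu=3.0, n=200):
        """Classical Sievert line-source integral, evaluated numerically.

        Source of active length L (cm) on the y-axis inside a filter of
        radial thickness t (cm) with attenuation coefficient mu (1/cm).
        Relative dose rate at (x, y), x != 0, in arbitrary units.
        """
        ys = np.linspace(-L / 2, L / 2, n)   # source element positions
        r2 = x**2 + (y - ys) ** 2            # element-to-point distance squared
        cos_theta = np.abs(x) / np.sqrt(r2)  # obliquity w.r.t. the filter wall
        return np.sum(np.exp(-mu * t / cos_theta) / r2) / n

    print(sievert_dose_rate(2.0, 0.0))       # point on the transverse axis
    ```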

  18. Addiction recovery: its definition and conceptual boundaries.

    PubMed

    White, William L

    2007-10-01

    The addiction field's failure to achieve consensus on a definition of "recovery" from severe and persistent alcohol and other drug problems undermines clinical research, compromises clinical practice, and muddles the field's communications to service constituents, allied service professionals, the public, and policymakers. This essay discusses 10 questions critical to the achievement of such a definition and offers a working definition of recovery that attempts to meet the criteria of precision, inclusiveness, exclusiveness, measurability, acceptability, and simplicity. The key questions explore who has professional and cultural authority to define recovery, the defining ingredients of recovery, the boundaries (scope and depth) of recovery, and temporal benchmarks of recovery (when recovery begins and ends). The process of defining recovery touches on some of the most controversial issues within the addictions field.

  19. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.

  20. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS)

    PubMed Central

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, were investigated. The results indicate that the accuracy is 0.5% FS, therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also has high sensitivity and accuracy. PMID:22573960

  1. Vibrationally averaged post Born-Oppenheimer isotopic dipole moment calculations approaching spectroscopic accuracy.

    PubMed

    Arapiraca, A F C; Jonsson, Dan; Mohallem, J R

    2011-12-28

    We report an upgrade of the Dalton code to include post Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10^-4 debye in the calculations of isotopic dipole moments. Calculations at the self-consistent field level reach this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values.

  2. Numerical Solutions of the Nonlinear Fractional-Order Brusselator System by Bernstein Polynomials

    PubMed Central

    Khan, Rahmat Ali; Tajadodi, Haleh; Johnston, Sarah Jane

    2014-01-01

    In this paper we propose the Bernstein polynomials to achieve the numerical solutions of the nonlinear fractional-order chaotic system known as the fractional-order Brusselator system. We use operational matrices of fractional integration and multiplication of Bernstein polynomials, which turns the nonlinear fractional-order Brusselator system into a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques. PMID:25485293
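
    For readers unfamiliar with the basis involved, the Python sketch below evaluates the Bernstein basis polynomials and the resulting degree-n approximation of a function on [0, 1]. It illustrates the building blocks only; the operational matrices of fractional integration used in the paper are beyond a short example.

    ```python
    import numpy as np
    from math import comb

    def bernstein_basis(n, t):
        """All n+1 Bernstein basis polynomials of degree n at points t in [0, 1]."""
        t = np.asarray(t, dtype=float)
        return np.array([comb(n, i) * t**i * (1 - t) ** (n - i) for i in range(n + 1)])

    def bernstein_approx(f, n, t):
        """Degree-n Bernstein approximation: sum_i f(i/n) * b_{i,n}(t)."""
        coeffs = np.array([f(i / n) for i in range(n + 1)])
        return coeffs @ bernstein_basis(n, t)

    # The approximation error for a smooth function shrinks as n grows.
    ts = np.linspace(0.0, 1.0, 5)
    print(bernstein_approx(np.exp, 20, ts) - np.exp(ts))
    ```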

  3. Radiological interpretation of images displayed on tablet computers: a systematic review.

    PubMed

    Caffery, L J; Armfield, N R; Smith, A C

    2015-06-01

    To review the published evidence and to determine if radiological diagnostic accuracy is compromised when images are displayed on a tablet computer and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84-98%), specificity (74-100%) and accuracy rates (98-100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. The iPad may be appropriate for an on-call radiologist to use for radiological interpretation.

  4. Advanced Launch System advanced development oxidizer turbopump program: Technical implementation plan

    NASA Technical Reports Server (NTRS)

    Ferlita, F.

    1989-01-01

    The Advanced Launch Systems (ALS) Advanced Development Oxidizer Turbopump Program has designed, fabricated and demonstrated a low cost, highly reliable oxidizer turbopump for the Space Transportation Engine that minimizes the recurring cost for the ALS engines. Pratt and Whitney's (P and W's) plan for integrating the analyses, testing, fabrication, and other program efforts is addressed. This plan offers a comprehensive description of the total effort required to design, fabricate, and test the ALS oxidizer turbopump. The proposed ALS oxidizer turbopump reduces turbopump costs over current designs by taking advantage of design simplicity and state-of-the-art materials and producibility features without compromising system reliability. This is accomplished by selecting turbopump operating conditions that are within known successful operating regions and by using proven manufacturing techniques.

  5. An analytical model with flexible accuracy for deep submicron DCVSL cells

    NASA Astrophysics Data System (ADS)

    Valiollahi, Sepideh; Ardeshir, Gholamreza

    2018-07-01

    Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise-immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off, including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) the model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted for in the analysis; and (4) the changes in the operating modes of transistors during transitions, together with an efficient submicron I-V model which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by analytical solutions is below 10%, while this amount is below 7% for numerical solutions.

  6. The value of predicting restriction of fetal growth and compromise of its wellbeing: Systematic quantitative overviews (meta-analysis) of test accuracy literature.

    PubMed

    Morris, Rachel K; Khan, Khalid S; Coomarasamy, Aravinthan; Robson, Stephen C; Kleijnen, Jos

    2007-03-08

    Restriction of fetal growth and compromise of fetal wellbeing remain significant causes of perinatal death and childhood disability. At present, there is a lack of scientific consensus about the best strategies for predicting these conditions before birth. Therefore, there is uncertainty about the best management of pregnant women who might have a growth restricted baby. This is likely to be due to a dearth of clear collated information from individual research studies drawn from different sources on this subject. A series of systematic reviews and meta-analyses will be undertaken to determine, among pregnant women, the accuracy of various tests to predict and/or diagnose fetal growth restriction and compromise of fetal wellbeing. We will search Medline, Embase, Cochrane Library, MEDION, citation lists of review articles and eligible primary articles and will contact experts in the field. Independent reviewers will select studies, extract data and assess study quality according to established criteria. Language restrictions will not be applied. Data synthesis will involve meta-analysis (where appropriate), exploration of heterogeneity and publication bias. The project will collate and synthesise the available evidence regarding the value of the tests for predicting restriction of fetal growth and compromise of fetal wellbeing. The systematic overviews will assess the quality of the available evidence, estimate the magnitude of potential benefits, identify those tests with good predictive value and help formulate practice recommendations.

  7. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and were further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  8. Discontinuity Detection in the Shield Metal Arc Welding Process

    PubMed Central

    Cocota, José Alberto Naves; Garcia, Gabriel Carvalho; da Costa, Adilson Rodrigues; de Lima, Milton Sérgio Fernandes; Rocha, Filipe Augusto Santos; Freitas, Gustavo Medeiros

    2017-01-01

    This work proposes a new methodology for the detection of discontinuities in the weld bead applied in Shielded Metal Arc Welding (SMAW) processes. The detection system is based on two sensors, a microphone and a piezoelectric transducer, that acquire acoustic emissions generated during welding. The feature vectors extracted from the sensor dataset are used to construct classifier models. The approaches based on Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers are able to identify with high accuracy the three proposed weld bead classes: desirable weld bead, shrinkage cavity and burn-through discontinuities. Experimental results illustrate the system's high accuracy, greater than 90% for each class. A novel Hierarchical Support Vector Machine (HSVM) structure is proposed to make the use of this system feasible in industrial environments. This approach presented 96.6% overall accuracy. Given the simplicity of the equipment involved, this system can be applied in the metal transformation industries. PMID:28489045

  9. Discontinuity Detection in the Shield Metal Arc Welding Process.

    PubMed

    Cocota, José Alberto Naves; Garcia, Gabriel Carvalho; da Costa, Adilson Rodrigues; de Lima, Milton Sérgio Fernandes; Rocha, Filipe Augusto Santos; Freitas, Gustavo Medeiros

    2017-05-10

    This work proposes a new methodology for the detection of discontinuities in the weld bead applied in Shielded Metal Arc Welding (SMAW) processes. The detection system is based on two sensors, a microphone and a piezoelectric transducer, that acquire acoustic emissions generated during welding. The feature vectors extracted from the sensor dataset are used to construct classifier models. The approaches based on Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers are able to identify with high accuracy the three proposed weld bead classes: desirable weld bead, shrinkage cavity and burn-through discontinuities. Experimental results illustrate the system's high accuracy, greater than 90% for each class. A novel Hierarchical Support Vector Machine (HSVM) structure is proposed to make the use of this system feasible in industrial environments. This approach presented 96.6% overall accuracy. Given the simplicity of the equipment involved, this system can be applied in the metal transformation industries.
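
    As a rough illustration of the hierarchical idea (not the authors' implementation), a two-stage classifier can first gate desirable beads from defects and only then discriminate the defect type. The scikit-learn sketch below uses synthetic placeholder features and labels in place of the paper's acoustic-emission data.

```python
# Hypothetical two-stage hierarchical SVM (HSVM) sketch for weld bead
# classification; features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))    # stand-in acoustic-emission feature vectors
y = rng.integers(0, 3, size=300)  # 0: desirable, 1: shrinkage cavity, 2: burn-through

# Stage 1: desirable bead vs. any discontinuity.
stage1 = SVC(kernel="rbf").fit(X, (y > 0).astype(int))

# Stage 2: trained on discontinuities only, separates the two defect types.
defects = y > 0
stage2 = SVC(kernel="rbf").fit(X[defects], y[defects])

def hsvm_predict(x):
    """Route one feature vector through the hierarchy."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return 0                        # desirable weld bead
    return int(stage2.predict(x)[0])    # defect class
```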

  10. Laparoscopic insertion of the Moss feeding tube.

    PubMed

    Albrink, M H; Hagan, K; Rosemurgy, A S

    1993-12-01

    Placement of enteral feeding tubes is an important part of a surgeon's skill base. Surgical insertion of feeding tubes has been performed safely for many years with very few modifications. With the recent surge in interest and applicability of other laparoscopic procedures, it is well within the skills of the average laparoscopic surgeon to insert feeding tubes. We describe herein a simple technique for the insertion of the Moss feeding tube. The procedure described is minimally invasive, simple, safe, and accurate.

  11. Radiological interpretation of images displayed on tablet computers: a systematic review

    PubMed Central

    Armfield, N R; Smith, A C

    2015-01-01

    Objective: To review the published evidence and to determine whether radiological diagnostic accuracy is compromised when images are displayed on a tablet computer, and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. Methods: We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using the Quality Appraisal of Diagnostic Reliability Studies tool or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results: 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84–98%), specificity (74–100%) and accuracy rates (98–100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was a near complete consensus from authors on the non-inferiority of the diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Conclusion: Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. Advances in knowledge: The iPad may be appropriate for an on-call radiologist to use for radiological interpretation. PMID:25882691

  12. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a software package for phylogeny estimation based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the latest version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  13. Open Port Probe Sampling Interface for the Direct Coupling of Biocompatible Solid-Phase Microextraction to Atmospheric Pressure Ionization Mass Spectrometry.

    PubMed

    Gómez-Ríos, Germán Augusto; Liu, Chang; Tascon, Marcos; Reyes-Garcés, Nathaly; Arnold, Don W; Covey, Thomas R; Pawliszyn, Janusz

    2017-04-04

    In recent years, the direct coupling of solid-phase microextraction (SPME) and mass spectrometry (MS) has shown great potential to improve limits of quantitation, accelerate analysis throughput, and diminish potential matrix effects when compared to direct injection into the MS. In this study, we introduce the open port probe (OPP) as a robust interface to couple biocompatible SPME (Bio-SPME) fibers to MS systems for direct electrospray ionization. The presented design consisted of minimal alterations to the front-end of the instrument and provided better sensitivity, simplicity, speed, wider compound coverage, and higher throughput in comparison to the LC-MS based approach. Quantitative determination of clenbuterol, fentanyl, and buprenorphine was successfully achieved in human urine. Despite the use of short extraction/desorption times (5 min/5 s), limits of quantitation below the minimum required performance levels (MRPL) set by the World Anti-Doping Agency (WADA) were obtained, with good accuracy (≥90%) and linearity (R² > 0.99) over the range evaluated for all analytes, using sample volumes of 300 μL. In-line technologies such as multiple reaction monitoring with multistage fragmentation (MRM³) and differential mobility spectrometry (DMS) were used to enhance the selectivity of the method without compromising analysis speed. On the basis of calculations, once coupled to a high-throughput setup, this method can potentially yield preparation times as low as 15 s per sample based on the 96-well plate format. Our results demonstrated that Bio-SPME-OPP-MS efficiently integrates sampling/sample cleanup and atmospheric pressure ionization, making it an advantageous configuration for several bioanalytical applications, including doping control in sports, in vivo tissue sampling, and therapeutic drug monitoring.

  14. Increasing Speed of Processing With Action Video Games

    PubMed Central

    Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne

    2010-01-01

    In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453

  15. Air-Microfluidics: Creating Small, Low-cost, Portable Air Quality Sensors

    EPA Science Inventory

    Air-microfluidics shows great promise in dramatically reducing the size, cost, and power requirements of future air quality sensors without compromising their accuracy. Microfabrication provides a suite of relatively new tools for the development of micro electro mechanical syste...

  16. Application of Quasi-Linearization Techniques to Rail Vehicle Dynamic Analyses

    DOT National Transportation Integrated Search

    1978-11-01

    The objective of the work reported here was to define methods for applying the describing function technique to realistic models of nonlinear rail cars. The describing function method offers a compromise between the accuracy of nonlinear digital simu...

  17. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    NASA Astrophysics Data System (ADS)

    Smith, David J.

    2018-04-01

    The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy, because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm, which predominantly uses basic linear algebra operations, is provided, with a full implementation available on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
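
    The core of the nearest-neighbour discretisation can be sketched in a few lines: velocities are evaluated by quadrature over a fine point set, while the unknown traction lives on a coarser force point set, the two being linked by nearest-neighbour interpolation. The Python sketch below assumes the standard Cortez regularized-stokeslet blob and uniform quadrature weights; it is not the paper's published Matlab/GNU Octave code.

```python
# Sketch of nearest-neighbour regularized-stokeslet assembly (assumed form,
# not the paper's published code). The 1/(8*pi*mu) prefactor is omitted.
import numpy as np

def reg_stokeslet(x, y, eps):
    """Cortez regularized stokeslet: 3x3 velocity/force block."""
    d = x - y
    r2 = d @ d
    denom = (r2 + eps**2) ** 1.5
    return ((r2 + 2 * eps**2) / denom) * np.eye(3) + np.outer(d, d) / denom

def assemble_nn(quad_pts, force_pts, eps, w=None):
    """Velocity-at-force-points matrix: quadrature over the fine point set,
    each quadrature point's traction taken from its nearest force point."""
    w = np.ones(len(quad_pts)) if w is None else w
    nn = np.array([np.argmin(np.linalg.norm(force_pts - q, axis=1))
                   for q in quad_pts])
    F = len(force_pts)
    A = np.zeros((3 * F, 3 * F))
    for i, xf in enumerate(force_pts):        # collocation points
        for q, xq in enumerate(quad_pts):     # quadrature points
            j = nn[q]                         # force DOF owning this point
            A[3*i:3*i+3, 3*j:3*j+3] += w[q] * reg_stokeslet(xf, xq, eps)
    return A
```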

  18. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    NASA Astrophysics Data System (ADS)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation has a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is usually preferred because of its popularity. However, the computation involved in the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case of furnace temperature data is used to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and monitoring process control. Yet GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with shorter computational time.
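
    For readers unfamiliar with the GBM alternative, the update below is the exact log-normal discretisation used to simulate or forecast a GBM path; the drift, volatility, and step size are illustrative assumptions, not the furnace study's fitted values.

```python
# Exact GBM update: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
import numpy as np

def gbm_path(s0, mu, sigma, dt, n, seed=0):
    """Simulate n steps of geometric Brownian motion starting at s0."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))

# Illustrative parameters only; a monitoring scheme would compare new
# observations against prediction intervals derived from the fitted model.
path = gbm_path(s0=100.0, mu=0.001, sigma=0.02, dt=1.0, n=200)
```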

  19. Mobius Assembly: A versatile Golden-Gate framework towards universal DNA assembly.

    PubMed

    Andreou, Andreas I; Nakayama, Naomi

    2018-01-01

    Synthetic biology builds upon the foundation of engineering principles, prompting innovation and improvement in biotechnology via a design-build-test-learn cycle. A community-wide standard in DNA assembly would enable bio-molecular engineering at the levels of predictivity and universality in design and construction that are comparable to other engineering fields. Golden Gate Assembly technology, with its robust capability to unidirectionally assemble numerous DNA fragments in a one-tube reaction, has the potential to deliver a universal standard framework for DNA assembly. While current Golden Gate Assembly frameworks (e.g. MoClo and Golden Braid) render either high cloning capacity or vector toolkit simplicity, the technology can be made more versatile: simple, streamlined, and cost/labor-efficient, without compromising capacity. Here we report the development of a new Golden Gate Assembly framework named Mobius Assembly, which combines vector toolkit simplicity with high cloning capacity. It is based on a two-level, hierarchical approach and utilizes a low-frequency cutter to reduce domestication requirements. Mobius Assembly embraces the standard overhang designs designated by MoClo, Golden Braid, and Phytobricks and is largely compatible with already available Golden Gate part libraries. In addition, dropout cassettes encoding chromogenic proteins were implemented for cost-free visible cloning screening that color-code different cloning levels. As proofs of concept, we have successfully assembled up to 16 transcriptional units of various pigmentation genes in both operon and multigene arrangements. Taken together, Mobius Assembly delivers enhanced versatility and efficiency in DNA assembly, facilitating improved standardization and automation.

  20. A Survey of the Isentropic Euler Vortex Problem Using High-Order Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    The flux reconstruction (FR) method offers a simple, efficient, and easy-to-implement approach, and it has been shown to equate to a differential formulation of discontinuous Galerkin (DG) methods. The FR method is also accurate to an arbitrary order, and the isentropic Euler vortex problem is used here to empirically verify this claim. This problem is widely used in computational fluid dynamics (CFD) to verify the accuracy of a given numerical method due to its simplicity and known exact solution at any given time. While verifying our FR solver, multiple obstacles emerged that prevented us from achieving the expected order of accuracy over short and long amounts of simulation time. It was found that these complications stemmed from a few overlooked details in the original problem definition, combined with the FR and DG methods achieving high accuracy with minimal dissipation. This paper is intended to consolidate the many versions of the vortex problem found in the literature and to highlight some of the consequences if these overlooked details remain neglected.
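
    One widely used form of the vortex initial condition (the paper's point being precisely that many variants circulate) is sketched below; the unit freestream, vortex strength beta, and nondimensionalisation are assumptions of this particular variant.

```python
# A common isentropic Euler vortex initial condition (one of several variants).
import numpy as np

def vortex_ic(x, y, x0=0.0, y0=0.0, beta=5.0, gamma=1.4):
    """Primitive variables (rho, u, v, p); unit freestream, gas constant R = 1."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    f = np.exp((1.0 - r2) / 2.0)
    u = 1.0 - beta / (2.0 * np.pi) * (y - y0) * f
    v = 1.0 + beta / (2.0 * np.pi) * (x - x0) * f
    T = 1.0 - (gamma - 1.0) * beta**2 / (8.0 * gamma * np.pi**2) * f**2
    rho = T ** (1.0 / (gamma - 1.0))   # isentropic relation with unit freestream
    return rho, u, v, rho * T          # p = rho * T for R = 1
```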

  1. Interaction of part-through cracks in a flat plate

    NASA Technical Reports Server (NTRS)

    Aksel, B.; Erdogan, F.

    1985-01-01

    The accuracy of the line spring model is determined. The effect of interaction between two and three cracks is investigated, and extensive numerical results which may be useful in applications are provided. The line spring model, combined with Reissner's plate theory, is formulated for use with any number and configuration of cracks, provided that there is symmetry. This model is used to find stress intensity factors for elliptic internal cracks, elliptic edge cracks and two opposite elliptic edge cracks. Despite the simplicity of the line spring model, the results are found to be in close agreement.

  2. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of aberration correction using the GS algorithm can be significantly enhanced by using a vortex image spot as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
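
    The GS iteration itself is compact enough to state directly. The numpy sketch below is the textbook two-plane version, with the vortex image spot of the paper entering simply as the choice of `target_amp`; grid sizes and the random starting phase are assumptions.

```python
# Textbook Gerchberg-Saxton iteration between SLM (pupil) and image planes.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
    """Return a pupil-plane phase whose far field approximates target_amp."""
    phase = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, source_amp.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))            # keep phase, impose source amplitude
    return phase
```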

  3. CubeSat Remote Sensing: A Survey of Current Capabilities

    NASA Astrophysics Data System (ADS)

    Hegel, D.

    2014-12-01

    Recent years have seen dramatic growth in the availability and capability of very small satellites for atmospheric sensing and other space-based science, as the simplicity of integration and low cost of these platforms enable projects that would otherwise be prohibitively expensive or demand excessive expertise and infrastructure to execute. This paper surveys the current state of the art for CubeSat performance, including pointing accuracy, geolocation, available power, and data downlink capacity. Applications for upcoming missions, such as CeREs, MinXSS, and HARP, will also be discussed.

  4. Model-order reduction of lumped parameter systems via fractional calculus

    NASA Astrophysics Data System (ADS)

    Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio

    2018-04-01

    This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
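
    As a concrete example of the operator such reduced models rest on, the Grünwald-Letnikov sum below approximates a fractional derivative of order alpha on uniformly sampled data; it is a generic numerical illustration, not the authors' reduction procedure.

```python
# Grünwald-Letnikov approximation of D^alpha f on a uniform grid of step h.
import numpy as np

def gl_derivative(f, alpha, h):
    """Fractional derivative of order alpha at each sample of f."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                 # w_k = w_{k-1} * (1 - (alpha + 1)/k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # D^alpha f(t_i) ~ h^(-alpha) * sum_k w_k f(t_i - k h)
    return np.array([np.dot(w[: i + 1], f[i::-1]) for i in range(n)]) / h**alpha
```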

  5. Self-Extinguishing Lithium Ion Batteries Based on Internally Embedded Fire-Extinguishing Microcapsules with Temperature-Responsiveness.

    PubMed

    Yim, Taeeun; Park, Min-Sik; Woo, Sang-Gil; Kwon, Hyuk-Kwon; Yoo, Jung-Keun; Jung, Yeon Sik; Kim, Ki Jae; Yu, Ji-Sang; Kim, Young-Jun

    2015-08-12

    User safety is one of the most critical issues for the successful implementation of lithium ion batteries (LIBs) in electric vehicles and their further expansion in large-scale energy storage systems. Herein, we propose a novel approach to realize self-extinguishing capability of LIBs for effective safety improvement by integrating temperature-responsive microcapsules containing a fire-extinguishing agent. The microcapsules are designed to release an extinguisher agent upon increased internal temperature of an LIB, resulting in rapid heat absorption through an in situ endothermic reaction and suppression of further temperature rise and undesirable thermal runaway. In a standard nail penetration test, the temperature rise is reduced by 74% without compromising electrochemical performances. It is anticipated that on the strengths of excellent scalability, simplicity, and cost-effectiveness, this novel strategy can be extensively applied to various high energy-density devices to ensure human safety.

  6. Design flood hydrograph estimation procedure for small and fully-ungauged basins

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.

    2013-12-01

    The Rational Formula is the most widely applied equation in practical hydrology due to its simplicity and its effective compromise between theory and data availability. Although the Rational Formula has several drawbacks, it is reliable and surprisingly accurate considering the paucity of input information. However, after more than a century, recent computational and theoretical advances, together with progress in large-scale monitoring, compel us to suggest a more advanced yet still empirical procedure for estimating peak discharge in small and ungauged basins. In this contribution an alternative empirical procedure (named EBA4SUB - Event Based Approach for Small and Ungauged Basins), based on the common modelling steps of design hyetograph, rainfall excess, and rainfall-runoff transformation, is described. The proposed approach, carefully adapted to the fully-ungauged basin condition, provides a potentially better estimation of the peak discharge and a design hydrograph shape, and, most importantly, reduces the subjectivity of the hydrologist in its application.
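
    For reference, the baseline that EBA4SUB aims to improve upon is a one-liner: in SI units the Rational Formula reads Q = 0.278 C i A, with Q in m³/s for i in mm/h and A in km², the 0.278 being the 1/3.6 unit conversion. The values below are illustrative only.

```python
# Rational Formula peak discharge in SI units.
def rational_q(c, i_mm_per_h, a_km2):
    """Q [m^3/s] = 0.278 * C * i [mm/h] * A [km^2]."""
    return 0.278 * c * i_mm_per_h * a_km2

# Illustrative values: C = 0.3, i = 50 mm/h, A = 2 km^2  ->  Q ~ 8.3 m^3/s
q = rational_q(0.3, 50.0, 2.0)
```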

  7. The MasPar MP-1 As a Computer Arithmetic Laboratory

    PubMed Central

    Anuta, Michael A.; Lozier, Daniel W.; Turner, Peter R.

    1996-01-01

    This paper is a blueprint for the use of a massively parallel SIMD computer architecture for the simulation of various forms of computer arithmetic. The particular system used is a DEC/MasPar MP-1 with 4096 processors in a square array. This architecture has many advantages for such simulations due largely to the simplicity of the individual processors. Arithmetic operations can be spread across the processor array to simulate a hardware chip. Alternatively they may be performed on individual processors to allow simulation of a massively parallel implementation of the arithmetic. Compromises between these extremes permit speed-area tradeoffs to be examined. The paper includes a description of the architecture and its features. It then summarizes some of the arithmetic systems which have been, or are to be, implemented. The implementation of the level-index and symmetric level-index, LI and SLI, systems is described in some detail. An extensive bibliography is included. PMID:27805123

  8. Neural networks to classify speaker independent isolated words recorded in radio car environments

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Simeoni, M.; Torri, V.

    1993-02-01

    Many applications, in particular those requiring nonlinear signal processing, have proved Artificial Neural Networks (ANNs) to be invaluable tools for model-free estimation. The classifying abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring implementation on compact integrated devices is taken into account: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. Data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished using cepstral coefficient representation, log area ratios and filter bank techniques. Multilayered perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for the implementation of compact real-time neural devices.

  9. Scientific papers for health informatics.

    PubMed

    Pereira, Samáris Ramiro; Duarte, Jacy Marcondes; Bandiera-Paiva, Paulo

    2013-01-01

    Starting from the hypothesis that the writing of scientific papers, particularly in interdisciplinary areas such as Health Informatics, may pose difficulties for authors, reducing their communicative efficacy or compromising their approval for publication, we aim to offer considerations on the main elements of producing this kind of text well. Scientific writing has peculiarities that must be taken into consideration: general characteristics, such as simplicity and objectivity, and characteristics specific to each area of knowledge, such as terminology, formatting and standardization. The research methodology adopted is bibliographical. The information was based on a literature review and on the authors' experience as teachers and assessors of scientific methodology for peer-reviewed publications in the area. As a result, we designed a checklist of items to be checked before submission of a paper to a scientific publication vehicle, in order to contribute to the promotion of research, facilitating publication and increasing its impact in this important area of knowledge.

  10. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm.

    PubMed

    Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho

    2018-04-01

    The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
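
    The abstract names a pretrained backbone combined with a self-trained network in Keras. A minimal sketch of that pattern follows; the backbone choice (VGG16), input size, and head layers are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch of a pretrained-CNN + self-trained-head setup in Keras;
# architecture details are assumed, not taken from the study.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False                      # keep pretrained features fixed

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # e.g. PCT vs. non-PCT
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.AUC()])
```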

  11. The ambiguity of simplicity in quantum and classical simulation

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Mahoney, John R.; Crutchfield, James P.

    2017-04-01

    A system's perceived simplicity depends on whether it is represented classically or quantally. This is not so surprising, as classical and quantum physics are descriptive frameworks built on different assumptions that capture, emphasize, and express different properties and mechanisms. What is surprising is that, as we demonstrate, simplicity is ambiguous: the relative simplicity between two systems can change sign when moving between classical and quantum descriptions. Here, we associate simplicity with small model-memory. We see that the notions of absolute physical simplicity at best form a partial, not a total, order. This suggests that appeals to principles of physical simplicity, via Ockham's Razor or to the "elegance" of competing theories, may be fundamentally subjective. Recent rapid progress in quantum computation and quantum simulation suggests that the ambiguity of simplicity will strongly impact statistical inference and, in particular, model selection.

  12. Combating QR-Code-Based Compromised Accounts in Mobile Social Networks.

    PubMed

    Guo, Dong; Cao, Jian; Wang, Xiaoqi; Fu, Qiang; Li, Qiang

    2016-09-20

    Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant, as malicious URLs are spread covertly through quick response (QR) codes to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one is to analyze the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other is to apply methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to platform restrictiveness and oversimplified methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of information sending modes of compromised and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy of the model is shown to be as high as 87.6%, with a false positive rate of only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect compromised accounts in MSNs.
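
    The entropy and conditional entropy features the model is built from are standard information-theoretic quantities; the sketch below computes them from empirical distributions, with the mapping from account messages to discrete symbols (e.g. sending modes or GPS cells) left as an assumption.

```python
# Empirical entropy H(X) and conditional entropy H(Y|X) over discrete symbols.
import numpy as np
from collections import Counter

def entropy(xs):
    """H(X) = -sum_x p(x) log2 p(x) over the empirical distribution."""
    counts = np.array(list(Counter(xs).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(xs, ys):
    """H(Y|X) = sum_x p(x) H(Y | X = x)."""
    xs, ys = list(xs), list(ys)
    n = len(xs)
    return sum(
        len([y for xi, y in zip(xs, ys) if xi == x]) / n
        * entropy([y for xi, y in zip(xs, ys) if xi == x])
        for x in set(xs)
    )

# Intuition: very low entropy of sending times or locations can flag
# automated (compromised) accounts relative to normal ones.
```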

  13. Combating QR-Code-Based Compromised Accounts in Mobile Social Networks

    PubMed Central

    Guo, Dong; Cao, Jian; Wang, Xiaoqi; Fu, Qiang; Li, Qiang

    2016-01-01

    Cyber Physical Social Sensing makes mobile social networks (MSNs) popular with users. However, attacks are rampant, as malicious URLs are spread covertly through quick response (QR) codes to control compromised accounts in MSNs and propagate malicious messages. Currently, there are generally two types of methods to identify compromised accounts in MSNs: one is to analyze the potential threats on wireless access points and on handheld devices' operating systems so as to stop compromised accounts from spreading malicious messages; the other is to apply methods for detecting compromised accounts in online social networks to MSNs. These types of methods focus neither on the problems of MSNs themselves nor on the interaction of sensors' messages, which leads to platform restrictiveness and oversimplified methods. In order to stop the spreading of compromised accounts in MSNs effectively, the attacks have to be traced to their sources first. Through sensors, users exchange information in MSNs and acquire information by scanning QR codes. Therefore, analyzing the traces of sensor-related information helps to identify the compromised accounts in MSNs. This paper analyzes the diversity of information sending modes of compromised and normal accounts, analyzes the regularity of GPS (Global Positioning System)-based location information, and introduces the concepts of entropy and conditional entropy so as to construct an entropy-based model based on machine learning strategies. To achieve this goal, about 500,000 accounts of Sina Weibo and about 100 million corresponding messages were collected. Through validation, the accuracy of the model is shown to be as high as 87.6%, with a false positive rate of only 3.7%. Meanwhile, comparative experiments on the feature sets prove that sensor-based location information can be applied to detect compromised accounts in MSNs. PMID:27657071

  14. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  15. Mobius Assembly: A versatile Golden-Gate framework towards universal DNA assembly

    PubMed Central

    Andreou, Andreas I.

    2018-01-01

    Synthetic biology builds upon the foundation of engineering principles, prompting innovation and improvement in biotechnology via a design-build-test-learn cycle. A community-wide standard in DNA assembly would enable bio-molecular engineering at the levels of predictivity and universality in design and construction that are comparable to other engineering fields. Golden Gate Assembly technology, with its robust capability to unidirectionally assemble numerous DNA fragments in a one-tube reaction, has the potential to deliver a universal standard framework for DNA assembly. While current Golden Gate Assembly frameworks (e.g. MoClo and Golden Braid) render either high cloning capacity or vector toolkit simplicity, the technology can be made more versatile—simple, streamlined, and cost/labor-efficient, without compromising capacity. Here we report the development of a new Golden Gate Assembly framework named Mobius Assembly, which combines vector toolkit simplicity with high cloning capacity. It is based on a two-level, hierarchical approach and utilizes a low-frequency cutter to reduce domestication requirements. Mobius Assembly embraces the standard overhang designs designated by MoClo, Golden Braid, and Phytobricks and is largely compatible with already available Golden Gate part libraries. In addition, dropout cassettes encoding chromogenic proteins were implemented for cost-free visible cloning screening that color-code different cloning levels. As proofs of concept, we have successfully assembled up to 16 transcriptional units of various pigmentation genes in both operon and multigene arrangements. Taken together, Mobius Assembly delivers enhanced versatility and efficiency in DNA assembly, facilitating improved standardization and automation. PMID:29293531

  16. A systematic mapping study of process mining

    NASA Astrophysics Data System (ADS)

    Maita, Ana Rocío Cárdenas; Martins, Lucas Corrêa; López Paz, Carlos Ramón; Rafferty, Laura; Hung, Patrick C. K.; Peres, Sarajane Marques; Fantinato, Marcelo

    2018-05-01

    This study systematically assesses the process mining scenario from 2005 to 2014. The analysis of 705 papers evidenced 'discovery' (71%) as the main type of process mining addressed and 'categorical prediction' (25%) as the main mining task solved. The most applied traditional techniques are 'graph structure-based' ones (38%). Specifically concerning computational intelligence and machine learning techniques, we concluded that little relevance has been given to them; the most applied are 'evolutionary computation' (9%) and 'decision tree' (6%), respectively. Process mining challenges, such as balancing among robustness, simplicity, accuracy and generalization, could benefit from a larger use of such techniques.

  17. Particle trapping and manipulation using hollow beam with tunable size generated by thermal nonlinear optical effect

    NASA Astrophysics Data System (ADS)

    He, Bo; Cheng, Xuemei; Zhang, Hui; Chen, Haowei; Zhang, Qian; Ren, Zhaoyu; Ding, Shan; Bai, Jintao

    2018-05-01

    We report micron-sized particle trapping and manipulation using a hollow beam of tunable size, generated by cross-phase modulation via the thermal nonlinear optical effect in an ethanol medium. The results demonstrate that a particle can be trapped stably in air for hours and manipulated over a millimeter range with micrometer-level accuracy by modulating the size of the hollow beam. The merits of flexibility in tuning the beam size and simplicity in operation give this method great potential for the in situ study of individual particles in air.

  18. Reducing Bolt Preload Variation with Angle-of-Twist Bolt Loading

    NASA Technical Reports Server (NTRS)

    Thompson, Bryce; Nayate, Pramod; Smith, Doug; McCool, Alex (Technical Monitor)

    2001-01-01

    Critical high-pressure sealing joints on the Space Shuttle reusable solid rocket motor require precise control of bolt preload to ensure proper joint function. As the reusable solid rocket motor experiences rapid internal pressurization, correct bolt preloads maintain the sealing capability and structural integrity of the hardware. The angle-of-twist process provides the right combination of preload accuracy, reliability, process control, and assembly-friendly design, improving significantly over previous methods. The sophisticated angle-of-twist process controls have yielded answers to all discrepancies encountered, while the simplicity of the root process has assured joint preload reliability.

  19. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count and program simplicity in finding the optimal solution.
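
    For orientation, the canonical PSO velocity/position update that underlies such comparisons is sketched below on a continuous placeholder objective; the discrete integer-programming encoding of the timetabling problem is not reproduced here.

```python
# Canonical PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10.0, 10.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                 # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()      # update global best
    return g

best = pso(lambda z: np.sum(z**2), dim=5)  # placeholder objective only
```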

  20. A computer system for analysis and transmission of spirometry waveforms using volume sampling.

    PubMed

    Ostler, D V; Gardner, R M; Crapo, R O

    1984-06-01

    A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
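
    The volume-sampling idea reduces to a simple reciprocal relationship: each stored datum is the time taken for a fixed 10-ml volume increment, so instantaneous flow is the volume step divided by that interval, as in the sketch below (array contents are illustrative).

```python
# Flow reconstruction from volume sampling: fixed 10-ml steps, variable time.
import numpy as np

DV = 0.010  # litres per stored interval (10 ml)

def flow_from_intervals(dt_s):
    """Flow (L/s) for each 10-ml increment."""
    return DV / np.asarray(dt_s, dtype=float)

def cumulative_volume(dt_s):
    """Volume axis (L) matching each interval."""
    return DV * np.arange(1, len(dt_s) + 1)

# e.g. a 2-ms interval between 10-ml steps corresponds to 5 L/s
```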

  1. A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2009-01-01

    We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.

  2. Digital detection of endonuclease mediated gene disruption in the HIV provirus

    PubMed Central

    Sedlak, Ruth Hall; Liang, Shu; Niyonzima, Nixon; De Silva Feelixge, Harshana S.; Roychoudhury, Pavitra; Greninger, Alexander L.; Weber, Nicholas D.; Boissel, Sandrine; Scharenberg, Andrew M.; Cheng, Anqi; Magaret, Amalia; Bumgarner, Roger; Stone, Daniel; Jerome, Keith R.

    2016-01-01

    Genome editing by designer nucleases is a rapidly evolving technology utilized in a highly diverse set of research fields. Among all fields, the T7 endonuclease mismatch cleavage assay, or Surveyor assay, is the most commonly used tool to assess genomic editing by designer nucleases. This assay, while relatively easy to perform, provides only a semi-quantitative measure of mutation efficiency that lacks sensitivity and accuracy. We demonstrate a simple droplet digital PCR assay that quickly quantitates a range of indel mutations, with detection as low as 0.02% mutant in a wild-type background and with precision (≤6% CV) and accuracy superior to either the mismatch cleavage assay or clonal sequencing when compared to next-generation sequencing. The precision and simplicity of this assay will facilitate comparison of gene editing approaches and their optimization, accelerating progress in this rapidly moving field. PMID:26829887

  3. Economic Analysis in the Pacific Northwest Land Resources Project: Theoretical Considerations and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Morse, D. R. A.; Sahlberg, J. T.

    1977-01-01

    The Pacific Northwest Land Resources Inventory Demonstration Project is an attempt to combine a whole spectrum of heterogeneous geographic, institutional and applications elements in a synergistic approach to the evaluation of remote sensing techniques. This diversity is the prime motivating factor behind a theoretical investigation of alternative economic analysis procedures. For a multitude of reasons--simplicity, ease of understanding, financial constraints and credibility, among others--cost-effectiveness emerges as the most practical tool for conducting such evaluation determinations in the Pacific Northwest. Preliminary findings in two water resource application areas suggest, in conformity with most published studies, that Landsat-aided data collection methods enjoy substantial cost advantages over alternative techniques. The potential for sensitivity analysis based on cost/accuracy tradeoffs is considered on a theoretical plane in the absence of current accuracy figures concerning the Landsat-aided approach.

  4. Anatomic tibial component design can increase tibial coverage and rotational alignment accuracy: a comparison of six contemporary designs.

    PubMed

    Dai, Yifei; Scuderi, Giles R; Bischoff, Jeffrey E; Bertin, Kim; Tarabichi, Samih; Rajgopal, Ashok

    2014-12-01

    The aim of this study was to comprehensively evaluate contemporary tibial component designs against global tibial anatomy. We hypothesized that anatomically designed tibial components offer an increased morphological fit to the resected proximal tibia, with increased alignment accuracy, compared to symmetric and asymmetric designs. Using a multi-ethnic bone dataset, six contemporary tibial component designs were investigated, including anatomic, asymmetric, and symmetric design types. Investigations included (1) measurement of component conformity to the resected tibia using a comprehensive set of size and shape metrics; (2) assessment of component coverage on the resected tibia while ensuring clinically acceptable levels of rotation and overhang; and (3) evaluation of the incidence and severity of component downsizing due to adherence to rotational alignment and overhang requirements, and the associated compromise in tibial coverage. Differences in coverage were statistically compared across designs and ethnicities, as well as between placements with or without enforcement of proper rotational alignment. Compared to the non-anatomic designs investigated, the anatomic design exhibited better conformity to resected tibial morphology in size and shape, higher tibial coverage (92% compared to 85-87%), more cortical support (posteromedial region), a lower incidence of downsizing (3% compared to 39-60%), and less compromise of tibial coverage (0.5% compared to 4-6%) when enforcing proper rotational alignment. The anatomic design demonstrated a meaningful increase in tibial coverage with accurate rotational alignment compared to symmetric and asymmetric designs, suggesting its potential for fewer intra-operative compromises and improved performance. Level of evidence: III.

  5. Mining Roles and Access Control for Relational Data under Privacy and Accuracy Constraints

    ERIC Educational Resources Information Center

    Pervaiz, Zahid

    2013-01-01

    Access control mechanisms protect sensitive information from unauthorized users. However, when sensitive information is shared and a Privacy Protection Mechanism (PPM) is not in place, an authorized insider can still compromise the privacy of a person leading to identity disclosure. A PPM can use suppression and generalization to anonymize and…

  6. The Influence of Word Frequency on Word Retrieval: Measuring Covert Behaviors

    ERIC Educational Resources Information Center

    Chih, Yu-Chun; Stierwalt, Julie A. G.; LaPointe, Leonard L.; Chih, Yu-Pin

    2017-01-01

    Physiological activities (heart rate and respiratory rate) during a word retrieval task were measured in normal participants. Word frequency demonstrated a significant effect on naming accuracy and latencies but not on physiological activities. These data will serve as a basis for comparison for individuals with a compromised language system.

  7. Georeferencing in Gnss-Challenged Environment: Integrating Uwb and Imu Technologies

    NASA Astrophysics Data System (ADS)

    Toth, C. K.; Koppanyi, Z.; Navratil, V.; Grejner-Brzezinska, D.

    2017-05-01

    Acquiring geospatial data in GNSS-compromised environments remains a problem in mapping and positioning in general. Urban canyons, heavily vegetated areas and indoor environments represent different levels of GNSS signal availability, from weak to no signal reception. Even outdoors, with multiple GNSS systems and an ever-increasing number of satellites, there are many situations with limited or no access to GNSS signals. Independent navigation sensors, such as an IMU, can provide high-data-rate information, but their initial accuracy degrades quickly as the measurement data drift over time unless positioning fixes are provided from another source. At The Ohio State University's Satellite Positioning and Inertial Navigation (SPIN) Laboratory, as one feasible solution, Ultra-Wideband (UWB) radio units are used to aid positioning and navigation in GNSS-compromised environments, including indoor and outdoor scenarios. Here we report on experiences with georeferencing a pushcart-based sensor system under canopied areas. The positioning system is based on UWB and IMU sensor integration, and provides sensor platform orientation for an electromagnetic induction (EMI) sensor. Performance evaluation results are provided for various test scenarios, confirming acceptable results for applications where high accuracy is not required.

  8. Determining successional stage of temperate coniferous forests with Landsat satellite data

    NASA Technical Reports Server (NTRS)

    Fiorella, Maria; Ripple, William J.

    1995-01-01

    Thematic Mapper (TM) digital imagery was used to map forest successional stages and to evaluate spectral differences between old-growth and mature forests in the central Cascade Range of Oregon. Relative sun incidence values were incorporated into the successional stage classification to compensate for topographically induced variation. Relative sun incidence improved the classification accuracy of young successional stages, but did not improve the classification accuracy of older, closed-canopy forest classes or overall accuracy. TM bands 1, 2, and 4; the normalized difference vegetation index (NDVI); and TM 4/3, 4/5, and 4/7 band ratio values for old-growth forests were found to be significantly lower than the values of mature forests (P ≤ 0.010). Wetness and the TM 4/5 and 4/7 band ratios all had low correlations with relative sun incidence (r² ≤ 0.16). The TM 4/5 band ratio was named the 'structural index' (SI) because of its ability to distinguish between mature and old-growth forests and its simplicity.
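
    The indices involved are simple per-pixel band arithmetic; a sketch using arrays standing in for the TM bands follows, with band numbering as in the abstract.

```python
# Per-pixel indices from Landsat TM bands (arrays of reflectance values).
import numpy as np

def ndvi(b3_red, b4_nir):
    """Normalized difference vegetation index."""
    return (b4_nir - b3_red) / (b4_nir + b3_red)

def structural_index(b4_nir, b5_swir):
    """TM 4/5 ratio; lower for old-growth than for mature stands."""
    return b4_nir / b5_swir
```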

  9. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
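
    The first of the two surrogates is easy to make concrete: a full quadratic polynomial in d variables fitted by least squares, as sketched below (kriging, the second surrogate, additionally requires fitting a spatial correlation model and is omitted here).

```python
# Quadratic response-surface fit by least squares (generic sketch).
import numpy as np
from itertools import combinations_with_replacement

def quad_design(X):
    """Columns: 1, each x_i, and all products x_i * x_j with i <= j."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

def fit_quadratic(X, y):
    beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
    return beta

def predict_quadratic(beta, X):
    return quad_design(X) @ beta
```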

  10. Do not fear your opponent: suboptimal changes of a prevention strategy when facing stronger opponents.

    PubMed

    Slezak, Diego Fernandez; Sigman, Mariano

    2012-08-01

    The time spent making a decision and its quality define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple decision-making experiments. However, in a more complex setting comprising components of regulatory focus, ambitions, fear, risk and social variables, adjustment of the speed-accuracy trade-off may not be optimal. Specifically, regulatory focus theory shows that people can be set in a promotion mode, where focus is on seeking to approach a desired state (to win), or in a prevention mode, focusing on avoiding undesired states (not to lose). In promotion, people are eager to take risks, increasing speed and decreasing accuracy. In prevention, strategic vigilance increases, decreasing speed and improving accuracy. When time and accuracy have to be compromised, one can ask which of these 2 strategies optimizes reward, leading to optimal performance. This is investigated here in a unique experimental environment. Decision making is studied in rapid chess (180 s per game), in which the goal of a player is to mate the opponent in a finite amount of time or, alternatively, to time-out the opponent with sufficient material to mate. In different games, players face strong and weak opponents. It was observed that (a) players adopt a more conservative strategy when facing strong opponents, with slower and more accurate moves, and (b) this strategy is suboptimal: players would increase their winning likelihood against strong opponents by using the policy they adopt when confronting opponents of similar strength. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  11. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles

    NASA Astrophysics Data System (ADS)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.

    2017-09-01

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
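
    The test bed itself is the familiar (optionally driven) Van der Pol oscillator; a classical integration sketch is given below, with the drive amplitude and frequency as placeholders. The quantum linearization scheme of the paper is, of course, not reproduced here.

```python
# Driven Van der Pol oscillator: x'' - mu (1 - x^2) x' + x = A cos(omega t)
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, state, mu=1.0, A=0.0, omega=1.0):
    """First-order form of the (driven) Van der Pol equation."""
    x, v = state
    return [v, mu * (1.0 - x**2) * v - x + A * np.cos(omega * t)]

sol = solve_ivp(vdp, (0.0, 100.0), [0.1, 0.0], max_step=0.01)
# After transients, (x, v) settles onto the limit cycle around which
# fluctuations are linearized.
```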

  12. New approach to analyzing soil-building systems

    USGS Publications Warehouse

    Safak, E.

    1998-01-01

    A new method of analyzing seismic response of soil-building systems is introduced. The method is based on the discrete-time formulation of wave propagation in layered media for vertically propagating plane shear waves. Buildings are modeled as an extension of the layered soil media by assuming that each story in the building is another layer. The seismic response is expressed in terms of wave travel times between the layers, and the wave reflection and transmission coefficients at layer interfaces. The calculation of the response is reduced to a pair of simple finite-difference equations for each layer, which are solved recursively starting from the bedrock. Compared with commonly used vibration formulation, the wave propagation formulation provides several advantages, including the ability to incorporate soil layers, simplicity of the calculations, improved accuracy in modeling the mass and damping, and better tools for system identification and damage detection.
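
    At each layer interface the recursion uses standard impedance-based coefficients; the sketch below gives the displacement reflection and transmission coefficients for a vertically propagating shear wave crossing from layer 1 into layer 2 (material values are placeholders).

```python
# Interface coefficients for a vertically propagating shear wave (standard
# impedance form; layer properties below are placeholders).
def interface_coefficients(rho1, vs1, rho2, vs2):
    """Displacement reflection/transmission, wave going from layer 1 to 2."""
    z1, z2 = rho1 * vs1, rho2 * vs2        # shear impedances
    r = (z1 - z2) / (z1 + z2)              # reflected back into layer 1
    t = 2.0 * z1 / (z1 + z2)               # transmitted into layer 2 (1 + r)
    return r, t

# Soft soil over stiff rock: strong reflection at the base of the soil column.
r, t = interface_coefficients(rho1=1800.0, vs1=200.0, rho2=2400.0, vs2=800.0)
```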

  13. Complicated Simplicity: Moral Identity Formation and Social Movement Learning in the Voluntary Simplicity Movement

    ERIC Educational Resources Information Center

    Sandlin, Jennifer A.; Walther, Carol S.

    2009-01-01

    This article examines the learning occurring within the voluntary simplicity social movement, focusing specifically on the learning and development of identity via "moral agency" in those individuals who embrace and practice voluntary simplicity. Four key findings are discussed. First, simplifiers craft new identities in a consumption-driven world…

  14. Phase noise cancellation in polarisation-maintaining fibre links

    NASA Astrophysics Data System (ADS)

    Rauf, B.; Vélez López, M. C.; Thoumany, P.; Pizzocaro, M.; Calonico, D.

    2018-03-01

    The distribution of ultra-narrow linewidth laser radiation is an integral part of many challenging metrological applications. Changes in the optical pathlength induced by environmental disturbances compromise the stability and accuracy of optical fibre networks distributing the laser light and call for active phase noise cancellation. Here we present a laboratory scale optical (at 578 nm) fibre network featuring all polarisation maintaining fibres in a setup with low optical powers available and tracking voltage-controlled oscillators implemented. The stability and accuracy of this system reach performance levels below 1 × 10⁻¹⁹ after 10 000 s of averaging.

  15. Reading in Examination-Type Situations: The Effects of Text Layout on Performance

    ERIC Educational Resources Information Center

    Lonsdale, Maria dos Santos; Dyson, Mary C.; Reynolds, Linda

    2006-01-01

    Examinations are conventionally used to measure candidates' achievement in a limited time period. However, the influence of text layout on performance may compromise the construct validity of the examination. An experimental study looked at the effects of the text layout on the speed and accuracy of a reading task in an examination-type situation.…

  16. Data Integrity: Why Aren't the Data Accurate? AIR 1989 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Gose, Frank J.

    The accuracy and reliability aspects of data integrity are discussed, with an emphasis on the need for consistency in responsibility and authority. A variety of ways in which data integrity can be compromised are discussed. The following sources of data corruption are described, and the ease or difficulty of identification and suggested actions…

  17. A cognitive information processing framework for distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Wang, Feiyi; Qi, Hairong

    2004-09-01

    In this paper, we present a cognitive agent framework (CAF) based on swarm intelligence and self-organization principles, and demonstrate it through collaborative processing for target classification in sensor networks. The framework involves integrated designs to provide both cognitive behavior at the organization level to conquer complexity and reactive behavior at the individual agent level to retain simplicity. The design tackles various problems in current information processing systems, including overly complex systems, maintenance difficulties, increasing vulnerability to attack, lack of capability to tolerate faults, and inability to identify and cope with low-frequency patterns. An important point distinguishing the presented work from classical AI research is that the acquired intelligence does not pertain to distinct individuals but to groups. It also deviates from multi-agent systems (MAS) due to the sheer quantity of extremely simple agents we are able to accommodate, to the degree that the loss of some coordination messages and the behavior of faulty/compromised agents will not affect the collective decision made by the group.

  18. (Non-) robustness of vulnerability assessments to climate change: An application to New Zealand.

    PubMed

    Fernandez, Mario Andres; Bucaram, Santiago; Renteria, Willington

    2017-12-01

    Assessments of vulnerability to climate change are a key element to inform climate policy and research. Assessments based on the aggregation of indicators have a strong appeal for their simplicity but are at risk of over-simplification and uncertainty. This paper explores the non-robustness of indicators-based assessments to changes in assumptions about the degree of substitution or compensation between indicators. Our case study is a nationwide assessment for New Zealand. We found that the ranking of geographic areas is sensitive to different parameterisations of the aggregation function; that is, areas that are categorised as highly vulnerable may switch to the least vulnerable category even with respect to the same climate hazards and population groups. Policy implications drawn from the assessments are then compromised. Though indicators-based approaches may help in identifying drivers of vulnerability, there are weak grounds for using them to recommend mitigation or adaptation decisions, given the high level of uncertainty caused by non-robustness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Zebrafish and relational memory: Could a simple fish be useful for the analysis of biological mechanisms of complex vertebrate learning?

    PubMed

    Gerlai, Robert

    2017-08-01

    Analysis of the zebrafish allows one to combine two distinct scientific approaches, comparative ethology and neurobehavioral genetics. Furthermore, this species arguably represents an optimal compromise between system complexity and practical simplicity. This mini-review focuses on a complex form of learning, relational learning and memory, in zebrafish. It argues that zebrafish are capable of this type of learning, and it attempts to show how this species may be useful in the analysis of the mechanisms and the evolution of this complex brain function. The review is not intended to be comprehensive. It is a short opinion piece that reflects the author's own biases, and it draws some of its examples from the work coming from his own laboratory. Nevertheless, it is written in the hope that it will persuade those who have not utilized zebrafish and who may be interested in opening their research horizon to this relatively novel but powerful vertebrate research tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A blood circulation model for reference man

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, R.W.; Eckerman, K.F.; Williams, L.R.

    This paper describes a dynamic blood circulation model that predicts the movement and gradual dispersal of a bolus of material in the circulation after its intravascular injection into an adult human. The main purpose of the model is to improve the dosimetry of internally deposited radionuclides that decay in the circulation to a significant extent. The total blood volume is partitioned into the blood contents of 24 separate organs or tissues, right heart chambers, left heart chambers, pulmonary circulation, arterial outflow to the systemic tissues (aorta and large arteries), and venous return from the systemic tissues (large veins). As a compromise between physical reality and computational simplicity, the circulation of blood is viewed as a system of first-order transfers between blood pools, with the delay time depending on the mean transit time across the pool. The model allows consideration of incomplete, tissue-dependent extraction of material during passage through the circulation and return of material from tissues to plasma.
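
    The transfer structure can be illustrated with a toy loop of three pools in series, each draining at a first-order rate equal to the reciprocal of its mean transit time; the actual model partitions the blood volume into roughly thirty pools, and the transit times below are hypothetical.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy illustration of first-order transfers between blood pools, with
    # each rate constant set by the reciprocal of the pool's mean transit
    # time. Three pools stand in for the model's many compartments.
    transit = np.array([0.05, 0.10, 0.05])   # minutes (hypothetical)
    k = 1.0 / transit                        # first-order rate constants, 1/min

    def flow(t, q):
        # closed loop: pool i drains into pool (i + 1) mod 3
        out = k * q
        return np.roll(out, 1) - out

    q0 = np.array([1.0, 0.0, 0.0])           # unit bolus injected into pool 0
    sol = solve_ivp(flow, (0.0, 2.0), q0, max_step=0.001)
    print("activity per pool at t = 2 min:", np.round(sol.y[:, -1], 3))
    ```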

  1. Coupled transverse and torsional vibrations in a mechanical system with two identical beams

    NASA Astrophysics Data System (ADS)

    Vlase, S.; Marin, M.; Scutaru, M. L.; Munteanu, R.

    2017-06-01

    The paper aims to study a plane system with bars, with certain symmetries. Such problems are encountered frequently in industry and civil engineering. Considerations related to the economy of the design process, constructive simplicity, cost and logistics make the use of identical parts a frequent procedure. The paper aims to determine the properties of the eigenvalues and eigenmodes for transverse and torsional vibrations of a mechanical system in which two of the three component bars are identical. Determining these properties in advance reduces the computational effort and computation time, and thus increases the accuracy of the results for such problems.

  2. Two-dimensional photoacoustic imaging of femtosecond filament in water

    NASA Astrophysics Data System (ADS)

    Potemkin, F. V.; Mareev, E. I.; Rumiantsev, B. V.; Bychkov, A. S.; Karabutov, A. A.; Cherepetskaya, E. B.; Makarov, V. A.

    2018-07-01

    We report a first-of-its-kind optoacoustic tomography of a femtosecond filament in water. Using a broadband (~100 MHz) piezoelectric transducer and a back-projection reconstruction technique, a single filament profile was retrieved. The obtained pressure distribution induced by the femtosecond filament allowed us to identify the size of the core and the energy reservoir with spatial resolution better than 10 µm. The photoacoustic imaging provides direct measurements of the energy deposition into the medium under filamentation of ultrashort laser pulses that cannot be obtained by existing techniques. Combined with its relative simplicity and high accuracy, photoacoustic imaging can be considered a breakthrough instrument for the investigation of filamentation.

  3. An in-depth review of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. R.

    1984-01-01

    The features, strong points and shortcomings of 10 numerical models commonly applied to assessing photovoltaic performance are discussed. The models range in capability from first-order approximations to full circuit-level descriptions. Account is taken, at times, of the cell and module characteristics, the orientation and geometry, array-level factors, the power-conditioning equipment, the overall plant performance, operation and maintenance (O&M) effects, and site-specific factors. Areas of improvement and/or necessary extensions are identified for several of the models. Although the simplicity of a model was found not necessarily to affect the accuracy of the data generated, the use of any one model was dependent on the application.

  4. Quantitative analysis of urea in human urine and serum by 1H nuclear magnetic resonance

    PubMed Central

    Liu, Lingyan; Mo, Huaping; Wei, Siwei

    2016-01-01

    A convenient and fast method for quantifying urea in biofluids is demonstrated using NMR analysis and the solvent water signal as a concentration reference. The urea concentration can be accurately determined with errors less than 3% between 1 mM and 50 mM, and less than 2% above 50 mM in urine and serum. The method is promising for various applications with advantages of simplicity, high accuracy, and fast non-destructive detection. With an ability to measure other metabolites simultaneously, this NMR method is also likely to find applications in metabolic profiling and system biology. PMID:22179722
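
    The internal-reference idea can be written as a short calculation: the urea concentration follows from the ratio of the urea and water peak areas scaled by the respective proton counts. This is the generic form of such a calibration, not necessarily the authors' exact procedure, and the peak areas below are illustrative.

    ```python
    # Generic internal-reference calculation of the kind the method relies
    # on: analyte concentration from the ratio of its peak area to the
    # solvent water peak, scaled by proton counts. Values are illustrative.
    C_WATER = 55500.0      # mM, molar concentration of pure water (~55.5 M)
    N_H_WATER = 2          # protons per water molecule
    N_H_UREA = 4           # NH2 protons per urea molecule

    def urea_mM(area_urea, area_water, water_fraction=1.0):
        """Urea concentration from relative NMR peak areas.

        water_fraction allows for a sample that is not pure water by volume.
        """
        return (area_urea / area_water) * (N_H_WATER / N_H_UREA) \
               * C_WATER * water_fraction

    print(f"{urea_mM(area_urea=1.0, area_water=2500.0):.1f} mM")   # -> 11.1 mM
    ```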

  5. Differential focal and nonfocal prospective memory accuracy in a demographically diverse group of nondemented community-dwelling older adults.

    PubMed

    Chi, Susan Y; Rabin, Laura A; Aronov, Avner; Fogel, Joshua; Kapoor, Ashu; Wang, Cuiling

    2014-11-01

    Although prospective memory (PM) is compromised in mild cognitive impairment (MCI), it is unclear which specific cognitive processes underlie these PM difficulties. We investigated older adults' performance on a computerized event-based focal versus nonfocal PM task that made varying demands on the amount of attentional control required to support intention retrieval. Participants were nondemented individuals (mean age=81.8 years; female=66.1%) enrolled in a community-based longitudinal study, including those with amnestic MCI (aMCI), nonamnestic MCI (naMCI), subjective cognitive decline (SCD), and healthy controls (HC). Participants included in the primary analysis (n=189) completed the PM task and recalled and/or recognized both focal and nonfocal PM cues presented in the task. Participants and their informants also completed a questionnaire assessing everyday PM failures. Relative to HC, those with aMCI and naMCI were significantly impaired in focal PM accuracy (p<.05). In a follow-up analysis that included 13 additional participants who successfully recalled and/or recognized at least one of the two PM cues, the naMCI group showed deficits in nonfocal PM accuracy (p<.05). There was a significant negative correlation between informant reports of PM difficulties and nonfocal PM accuracy (p<.01). PM failures in aMCI may be primarily related to impairment of spontaneous retrieval processes associated with the medial temporal lobe system, while PM failures in naMCI potentially indicate additional deficits in executive control functions and prefrontal systems. The observed focal versus nonfocal PM performance profiles in aMCI and naMCI may constitute specific behavioral markers of PM decline that result from compromise of separate neurocognitive systems.

  6. Analysis and trade-off studies of large lightweight mirror structures. [large space telescope

    NASA Technical Reports Server (NTRS)

    Soosaar, K.; Grin, R.; Ayer, F.

    1975-01-01

    A candidate mirror, hexagonally lightweighted, is analyzed under various loadings using as complete a procedure as possible. Successive simplifications are introduced and compared to an original analysis. A model which is a reasonable compromise between accuracy and cost is found and is used for making trade-off studies of the various structural parameters of the lightweighted mirror.

  7. Allocation of Attentional Resources toward a Secondary Cognitive Task Leads to Compromised Ankle Proprioceptive Performance in Healthy Young Adults

    PubMed Central

    Yasuda, Kazuhiro; Iimura, Naoyuki; Iwata, Hiroyasu

    2014-01-01

    The objective of the present study was to determine whether increased attentional demands influence the assessment of ankle joint proprioceptive ability in young adults. We used a dual-task condition, in which participants performed an ankle ipsilateral position-matching task with and without a secondary serial auditory subtraction task during target angle encoding. Two experiments were performed with two different cohorts: one in which the auditory subtraction task was easy (experiment 1a) and one in which it was difficult (experiment 1b). The results showed that, compared with the single-task condition, participants had higher absolute error under dual-task conditions in experiment 1b. The reduction in position-matching accuracy under an attentionally demanding cognitive task suggests that allocation of attentional resources toward a difficult secondary task can lead to compromised ankle proprioceptive performance. Therefore, these findings indicate that the difficulty level of the cognitive task may be the critical factor that decreases position-matching accuracy. We conclude that increased attentional demand from a difficult cognitive task does influence the assessment of ankle joint proprioceptive ability in young adults when measured using an ankle ipsilateral position-matching task. PMID:24523966

  8. Global analysis of microscopic fluorescence lifetime images using spectral segmentation and a digital micromirror spatial illuminator.

    PubMed

    Bednarkiewicz, Artur; Whelan, Maurice P

    2008-01-01

    Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
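
    The fitting step that benefits from the global good initial parameters can be sketched as a monoexponential decay fit seeded with the globally measured guess; the data and parameter values below are hypothetical, and the decay model in the paper need not be monoexponential.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch of the per-segment lifetime fit, assuming a monoexponential
    # decay; the globally measured "good initial parameters" serve as the
    # starting guess p0, which is what cuts the iteration count.
    def decay(t, amplitude, tau, offset):
        return amplitude * np.exp(-t / tau) + offset

    t = np.linspace(0.0, 20.0, 256)   # ns (hypothetical time base)
    counts = decay(t, 1000.0, 3.2, 25.0) \
             + np.random.default_rng(0).normal(0, 5, t.size)

    gip = (900.0, 3.0, 20.0)          # globally measured initial parameters
    params, _ = curve_fit(decay, t, counts, p0=gip)
    print(f"fitted lifetime: {params[1]:.2f} ns")
    ```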

  9. Loop quantum gravity simplicity constraint as surface defect in complex Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Han, Muxin; Huang, Zichang

    2017-05-01

    The simplicity constraint is studied in the context of four-dimensional spinfoam models with a cosmological constant. We find that the quantum simplicity constraint is realized as the two-dimensional surface defect in SL(2,C) Chern-Simons theory in the construction of spinfoam amplitudes. By this realization of the simplicity constraint in Chern-Simons theory, we are able to construct the new spinfoam amplitude with a cosmological constant for an arbitrary simplicial complex (with many 4-simplices). The semiclassical asymptotics of the amplitude is shown to correctly reproduce the four-dimensional Einstein-Regge action with a cosmological constant term.

  10. The outlook for precipitation measurements from space

    NASA Technical Reports Server (NTRS)

    Atlas, D.; Eckerman, J.; Meneghini, R.; Moore, R. K.

    1981-01-01

    To provide useful precipitation measurements from space, two requirements must be met: adequate spatial and temporal sampling of the storm and sufficient accuracy in the estimate of precipitation intensity. Although presently no single instrument or method completely satisfies both requirements, the visible/IR, microwave radiometer and radar methods can be used in a complementary manner. Visible/IR instruments provide good temporal sampling and rain area depiction, but recourse must be made to microwave measurements for quantitative rainfall estimates. The inadequacy of microwave radiometer measurements over land suggests, in turn, the use of radar. Several recently developed attenuating-wavelength radar methods are discussed in terms of their accuracy, dynamic range and system implementation. Traditionally, the requirements of high resolution and adequate dynamic range led to fairly costly and complex radar systems. Some simplifications and cost reductions can be made, however, by using K-band wavelengths, which have the advantages of greater sensitivity at low rain rates and higher resolution capabilities. Several recently proposed methods of this kind are reviewed in terms of accuracy and system implementation. Finally, an adaptive-pointing multi-sensor instrument is described that would exploit certain advantages of the IR, radiometric and radar methods.

  11. Foundations of measurement and instrumentation

    NASA Technical Reports Server (NTRS)

    Warshawsky, Isidore

    1990-01-01

    The user of instrumentation is provided with an understanding of the factors that influence instrument performance, selection, and application, and of the methods of interpreting and presenting the results of measurements. Such understanding is prerequisite to the successful attainment of the best compromise among reliability, accuracy, speed, cost, and importance of the measurement operation in achieving the ultimate goal of a project. Subjects covered include dimensions; units; sources of measurement error; methods of describing and estimating accuracy; deduction and presentation of results through empirical equations, including the method of least squares; and experimental and analytical methods of determining the static and dynamic behavior of instrumentation systems, including the use of analogs.

  12. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  13. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  14. Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    To improve the accuracy of robot pose positioning and control, this paper proposes an optimal control method for the robot end position and orientation based on dynamic tracking measurement. The method rests on actual measurement of the robot's D-H parameters, which are fed back to the robot for compensation. From the geometric parameters obtained by pose tracking measurement, an improved multi-sensor information fusion extended Kalman filter with continuous self-optimizing regression uses the geometric relationships between joint axes to estimate the kinematic parameters of the link model. The identified parameters are fed back to the robot in a timely manner to implement correction and compensation, yielding the optimal attitude angles and realizing optimized pose control. Experiments were performed on an independently developed 6R joint robot under dynamic tracking control. The simulation results show that the control method improves robot positioning accuracy and offers versatility, simplicity, and ease of operation.

  15. CFD Based Prediction of Discharge Coefficient of Sonic Nozzle with Surface Roughness

    NASA Astrophysics Data System (ADS)

    Bagaskara, Agastya; Agoes Moelyadi, Mochammad

    2018-04-01

    Due to its simplicity and accuracy, the sonic nozzle is widely used in gas flow measurement, gas flow meter calibration standards, and flow control. The nozzle obtains the mass flow rate by measuring temperature and pressure at the inlet during the choked flow condition and calculating the flow rate using the one-dimensional isentropic flow equation multiplied by a discharge coefficient, which accounts for the non-isentropic effects that reduce the mass flow. Proper determination of the discharge coefficient is crucial to ensure the accuracy of mass flow measurement by the nozzle. The available analytical solution for the prediction of the discharge coefficient assumes that the nozzle wall is hydraulically smooth, which causes disagreement with experimental results. In this paper, the discharge coefficient of a sonic nozzle is determined using a computational fluid dynamics method that takes into account the roughness of the wall. It is found that the result shows better agreement with the experimental data than the analytical result.
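
    The mass-flow relation the abstract refers to is the standard one-dimensional isentropic choked-flow equation scaled by the discharge coefficient; a short calculation for air at illustrative inlet conditions follows.

    ```python
    import math

    # One-dimensional isentropic choked-flow relation: ideal mass flow
    # through the throat, multiplied by a discharge coefficient Cd that
    # accounts for non-isentropic losses. Inputs are illustrative values
    # for air.
    def choked_mass_flow(p0, T0, area, gamma=1.4, R=287.0, Cd=1.0):
        """kg/s through a sonic nozzle; p0 [Pa], T0 [K], area [m^2]."""
        ideal = area * p0 * math.sqrt(gamma / (R * T0)) * \
                (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
        return Cd * ideal

    m_ideal = choked_mass_flow(p0=5.0e5, T0=293.15, area=1.0e-5)
    print(f"ideal: {m_ideal*1000:.2f} g/s, "
          f"with Cd=0.99: {0.99*m_ideal*1000:.2f} g/s")
    ```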

  16. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
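
    The second-moment idea at the heart of PFEM can be illustrated on a single axial bar: a first-order expansion of the response about the mean stiffness propagates the stiffness variance to a response variance, which can then be checked against Monte Carlo simulation, as the paper does. The values below are hypothetical, and the sketch omits the full random-field discretization.

    ```python
    import numpy as np

    # Second-moment propagation for a linear system K(k) u = f with random
    # stiffness k: a first-order expansion about the mean gives
    #   E[u] ~ u(k_mean),   Var[u] ~ (du/dk)^2 Var[k].
    # A single axial bar stands in for the full FE model; values are
    # hypothetical.
    k_mean, k_std = 2.0e6, 2.0e5        # N/m, stiffness statistics
    f = 1.0e3                           # N, deterministic load

    u_mean = f / k_mean                 # response at the mean stiffness
    du_dk = -f / k_mean**2              # analytic sensitivity of u = f/k
    u_std = abs(du_dk) * k_std

    # Monte Carlo check (the comparison the paper also performs)
    rng = np.random.default_rng(1)
    u_mc = f / rng.normal(k_mean, k_std, 200_000)
    print(f"second-moment: mean={u_mean:.3e}  std={u_std:.3e}")
    print(f"Monte Carlo  : mean={u_mc.mean():.3e}  std={u_mc.std():.3e}")
    ```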

  17. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  18. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  19. The VLBI time delay function for synchronous orbits

    NASA Technical Reports Server (NTRS)

    Rosenbaum, B.

    1972-01-01

    VLBI is a satellite tracking technique that to date has been applied largely to the tracking of synchronous orbits. These orbits are favorable for VLBI in that the remote satellite range allows continuous viewing from widely separated stations. The primary observable, the geometric time delay, is the difference in signal propagation time between the satellite and the baseline terminals. Extraordinary accuracy in angular position data on the satellite can be obtained by observation from baselines of continental dimensions. In satellite tracking, though, the common objective is to derive orbital elements. A question arises as to how the baseline vector bears on the accuracy of determining the elements. Our approach to this question is to derive an analytic expression for the time delay function in terms of Kepler elements and station coordinates. The analysis, which for simplicity is based on elliptic motion, shows that the resolution for the inclination of the orbital plane depends on the magnitude of the baseline polar component and the resolution for in-plane elements depends on the magnitude of a projected equatorial baseline component.
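
    The primary observable can be stated compactly: the geometric delay is the difference in signal path length to the two stations divided by the speed of light. The sketch below uses the exact two-path (near-field) form appropriate to a satellite at finite range; the coordinates are illustrative.

    ```python
    import numpy as np

    C = 299_792_458.0  # m/s

    # Geometric time delay between signal arrivals at the two baseline
    # terminals; for a satellite at finite range the exact two-path form
    # applies. Coordinates below are illustrative (Earth-centered frame,
    # roughly geosynchronous radius).
    def time_delay(sat, station1, station2):
        """Delay (s) = (range to station 2 - range to station 1) / c."""
        return (np.linalg.norm(sat - station2)
                - np.linalg.norm(sat - station1)) / C

    sat = np.array([42_164e3, 0.0, 0.0])
    st1 = np.array([6_371e3 * np.cos(0.3), 6_371e3 * np.sin(0.3), 0.0])
    st2 = np.array([6_371e3 * np.cos(-0.4), 6_371e3 * np.sin(-0.4), 1.0e5])
    print(f"geometric delay: {time_delay(sat, st1, st2)*1e6:.2f} microseconds")
    ```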

  20. Rediscovery and Revival of Analytical Refractometry for Protein Determination: Recombining Simplicity With Accuracy in the Digital Era.

    PubMed

    Anderle, Heinz; Weber, Alfred

    2016-03-01

    Among "vintage" methods of protein determination, quantitative analytical refractometry has received far less attention than well-established pharmacopoeial techniques based on protein nitrogen content, such as Dumas combustion (1831) and Kjeldahl digestion (1883). Protein determination by quantitative refractometry dates back to 1903 and has been extensively investigated and characterized in the following 30 years, but has since vanished into a few niche applications that may not require the degree of accuracy and precision essential for pharmaceutical analysis. However, because high-resolution and precision digital refractometers have replaced manual instruments, reducing time and resource consumption, the method appears particularly attractive from an economic, ergonomic, and environmental viewpoint. The sample solution can be measured without dilution or other preparation procedures than the separation of the protein-free matrix by ultrafiltration, which might even be omitted for a constant matrix and excipient composition. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  1. Test stand for precise measurement of impulse and thrust vector of small attitude control jets

    NASA Technical Reports Server (NTRS)

    Woodruff, J. R.; Chisel, D. M.

    1973-01-01

    A test stand which accurately measures the impulse bit and thrust vector of reaction jet thrusters used in the attitude control system of space vehicles has been developed. It can be used to measure, in a vacuum or ambient environment, both impulse and thrust vector of reaction jet thrusters using hydrazine or inert gas propellants. The ballistic pendulum configuration was selected because of its accuracy, simplicity, and versatility. The pendulum is mounted on flexure pivots rotating about a vertical axis at the center of its mass. The test stand has the following measurement capabilities: impulse of 0.00004 to 4.4 N-sec (0.00001 to 1.0 lb-sec) with a pulse duration of 0.5 msec to 1 sec; static thrust of 0.22 to 22 N (0.05 to 5 lb) with a 5 percent resolution; and thrust angle alinement of 0.22 to 22 N (0.05 to 5 lb) thrusters with 0.01 deg accuracy.
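
    For a flexure-pivot (torsional) pendulum, a hedged small-angle relation connects the measured peak deflection to the impulse bit: the thrust pulse imparts angular momentum I·w0 = J·r, and energy conservation relates w0 to the peak angle through the free period. The calibration values below are hypothetical, not the stand's actual constants.

    ```python
    import math

    # Small-angle relation for a torsional ballistic pendulum: a short
    # thrust pulse at moment arm r imparts angular momentum I*w0 = J*r,
    # and energy conservation links w0 to the peak deflection through the
    # free period T (w0 = theta_max * 2*pi / T). Values are hypothetical.
    def impulse_from_deflection(theta_max_rad, period_s, inertia_kg_m2, arm_m):
        omega0 = theta_max_rad * 2.0 * math.pi / period_s
        return inertia_kg_m2 * omega0 / arm_m          # N*s

    J = impulse_from_deflection(theta_max_rad=0.002, period_s=4.0,
                                inertia_kg_m2=0.5, arm_m=0.3)
    print(f"measured impulse bit: {J*1000:.3f} mN*s")
    ```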

  2. Avascular necrosis (AVN) of the proximal fragment in scaphoid nonunion: is intravenous contrast agent necessary in MRI?

    PubMed

    Schmitt, R; Christopoulos, G; Wagner, M; Krimmer, H; Fodor, S; van Schoonhoven, J; Prommersberger, K J

    2011-02-01

    The purpose of this prospective study is to assess the diagnostic value of intravenously applied contrast agent for diagnosing osteonecrosis of the proximal fragment in scaphoid nonunion, and to compare the imaging results with intraoperative findings. In 88 patients (7 women, 81 men) suffering from symptomatic scaphoid nonunion, preoperative MRI was performed (coronal PD-w FSE fs, sagittal-oblique T1-w SE nonenhanced and T1-w SE fs contrast-enhanced, sagittal T2*-w GRE). MRI interpretation was based on the intensity of contrast enhancement: 0 = none, 1 = focal, 2 = diffuse. Intraoperatively, the osseous viability was scored by means of bleeding points on the osteotomy site of the proximal scaphoid fragment: 0 = absent, 1 = moderate, 2 = good. Intraoperatively, 17 necrotic, 29 compromised, and 42 normal proximal fragments were found. In nonenhanced MRI, bone viability was judged necrotic in 1 patient, compromised in 20 patients, and unaffected in 67 patients. Contrast-enhanced MRI revealed 14 necrotic, 21 compromised, and 53 normal proximal fragments. Taking surgical findings as the standard of reference, the statistics for nonenhanced MRI were: sensitivity 6.3%, specificity 100%, positive PV 100%, negative PV 82.6%, and accuracy 82.9%; the statistics for contrast-enhanced MRI were: sensitivity 76.5%, specificity 98.6%, positive PV 92.9%, negative PV 94.6%, and accuracy 94.3%. Sensitivity for detecting avascular proximal fragments was significantly better (p<0.001) with contrast-enhanced MRI in comparison to nonenhanced MRI. Viability of the proximal fragment in scaphoid nonunion can be significantly better assessed with the use of contrast-enhanced MRI as compared to nonenhanced MRI. Bone marrow edema is an inferior indicator of osteonecrosis. Application of intravenous gadolinium is recommended for imaging scaphoid nonunion. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Computational and experimental investigation of free vibration and flutter of bridge decks

    NASA Astrophysics Data System (ADS)

    Helgedagsrud, Tore A.; Bazilevs, Yuri; Mathisen, Kjell M.; Øiseth, Ole A.

    2018-06-01

    A modified rigid-object formulation is developed, and employed as part of the fluid-object interaction modeling framework from Akkerman et al. (J Appl Mech 79(1):010905, 2012. https://doi.org/10.1115/1.4005072) to simulate free vibration and flutter of long-span bridges subjected to strong winds. To validate the numerical methodology, companion wind tunnel experiments have been conducted. The results show that the computational framework captures very precisely the aeroelastic behavior in terms of aerodynamic stiffness, damping and flutter characteristics. Considering its relative simplicity and accuracy, we conclude from our study that the proposed free-vibration simulation technique is a valuable tool in engineering design of long-span bridges.

  4. Operon-mapper: A Web Server for Precise Operon Identification in Bacterial and Archaeal Genomes.

    PubMed

    Taboada, Blanca; Estrada, Karel; Ciria, Ricardo; Merino, Enrique

    2018-06-19

    Operon-mapper is a web server that accurately, easily, and directly predicts the operons of any bacterial or archaeal genome sequence. The operon predictions are based on the intergenic distance of neighboring genes as well as the functional relationships of their protein-coding products. To this end, Operon-mapper finds all the ORFs within a given nucleotide sequence, along with their genomic coordinates, orthology groups, and functional relationships. We believe that Operon-mapper, due to its accuracy, simplicity and speed, as well as the relevant information that it generates, will be a useful tool for annotating and characterizing genomic sequences. http://biocomputo.ibt.unam.mx/operon_mapper/.
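
    The intergenic-distance criterion the server builds on can be caricatured in a few lines: consecutive genes on the same strand separated by a short gap are grouped into a candidate operon. Operon-mapper additionally weighs the functional relationships of the protein products, which this toy sketch (with a hypothetical gap threshold) omits.

    ```python
    # Toy version of the distance criterion: consecutive genes on the same
    # strand with a short intergenic gap join one candidate operon. The
    # functional-relationship evidence the server also uses is omitted.
    def candidate_operons(genes, max_gap=50):
        """genes: list of (start, end, strand) tuples sorted by start."""
        operons, current = [], [genes[0]]
        for prev, gene in zip(genes, genes[1:]):
            same_strand = prev[2] == gene[2]
            gap = gene[0] - prev[1]
            if same_strand and gap <= max_gap:
                current.append(gene)
            else:
                operons.append(current)
                current = [gene]
        operons.append(current)
        return operons

    genes = [(100, 400, "+"), (430, 900, "+"), (940, 1500, "+"),
             (1700, 2100, "-")]
    print([len(op) for op in candidate_operons(genes)])   # -> [3, 1]
    ```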

  5. Analytical and experimental design and analysis of an optimal processor for image registration

    NASA Technical Reports Server (NTRS)

    Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.

    1976-01-01

    The author has identified the following significant results. A quantitative measure of the registration processor accuracy in terms of the variance of the registration error was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal to noise ratio. The final expressions were presented to emphasize both the form and simplicity of their representation. In the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal to noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
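
    The stated result can be written out explicitly; the proportionality constant depends on details (noise model, window size) that the abstract does not give.

    ```latex
    % Relation stated in the abstract: the registration-error variance is
    % inversely proportional to the squared effective bandwidth times the
    % signal-to-noise ratio; C is an unspecified constant.
    \sigma_{\varepsilon}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}}
    \qquad\Longrightarrow\qquad
    \sigma_{\varepsilon}^{2} \;=\; \frac{C}{B_{e}^{2}\,\mathrm{SNR}}
    ```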

  6. New generation of elastic network models.

    PubMed

    López-Blanco, José Ramón; Chacón, Pablo

    2016-04-01

    The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis have widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. MY SIRR: Minimalist agro-hYdrological model for Sustainable IRRigation management-Soil moisture and crop dynamics

    NASA Astrophysics Data System (ADS)

    Albano, Raffaele; Manfreda, Salvatore; Celano, Giuseppe

    The paper introduces a minimalist water-driven crop model for sustainable irrigation management using an eco-hydrological approach. The model, called MY SIRR, uses a relatively small number of parameters and attempts to balance simplicity, accuracy, and robustness. MY SIRR is a quantitative tool to assess water requirements and agricultural production across different climates, soil types, crops, and irrigation strategies. The MY SIRR source code is published under a copyleft license. The FOSS approach could lower the financial barriers for smallholders, especially in developing countries, to the use of tools for better decision-making on strategies for short- and long-term water resource management.
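
    A minimal water-driven bucket model in this spirit can be written in a few lines: relative soil moisture in the root zone is updated daily from rainfall, irrigation and moisture-limited evapotranspiration, with overflow lost to percolation. This is a generic eco-hydrological sketch, not MY SIRR's actual equations, and all parameters are hypothetical.

    ```python
    import numpy as np

    # Generic daily bucket model: relative soil moisture s in a root zone
    # of porosity n and depth Zr, driven by rainfall, irrigation and
    # moisture-limited ET, with overflow lost as percolation/runoff.
    n, Zr = 0.4, 0.3                 # porosity [-], root-zone depth [m]
    ET_max = 0.004                   # m/day, well-watered crop ET
    s_star = 0.6                     # moisture level below which ET is reduced

    def step(s, rain, irrigation):
        et = ET_max * min(1.0, s / s_star)            # moisture-limited ET
        s_new = s + (rain + irrigation - et) / (n * Zr)
        leakage = max(0.0, s_new - 1.0) * n * Zr      # bucket overflow
        return min(s_new, 1.0), et, leakage

    rng = np.random.default_rng(2)
    s, total_irr = 0.5, 0.0
    for day in range(90):
        rain = rng.exponential(0.002) if rng.random() < 0.3 else 0.0
        irr = 0.005 if s < 0.35 else 0.0              # simple irrigation rule
        total_irr += irr
        s, _, _ = step(s, rain, irr)
    print(f"seasonal irrigation requirement: {total_irr*1000:.0f} mm")
    ```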

  8. Voluntary Simplicity: A Lifestyle Option.

    ERIC Educational Resources Information Center

    Pestle, Ruth E.

    This guide provides practical ideas for incorporating the concept of voluntary simplicity into home economics classes. Discussed in the first chapter are the need to study voluntary simplicity, its potential contributions to home economics, and techniques and a questionnaire for measuring student attitudes toward the concept. The remaining…

  9. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    PubMed

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and a better sorting performance usually comes with a higher sampling rate (SR). However for the long duration experiments on free-moving subjects, the miniaturized and wireless neural recording ICs are the current trend, and the compromise on sorting accuracy is usually made by a lower SR for the lower power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in 90 nm process, if the interpolation is appropriately performed during the spike sorting, the system operated at the SR of 12.5 k samples per second (sps) can outperform the one not having interpolation at 25 ksps on both accuracy and power.
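
    The interpolation-and-re-alignment idea can be illustrated offline: a spike sampled at a low rate is upsampled and shifted so that its interpolated peak lands on a common alignment point before features are extracted. This generic sketch does not reproduce the chip's fixed-point pipeline.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    # Offline illustration of interpolation + re-alignment: upsample the
    # spike (cubic here) and shift it so the interpolated peak lands on a
    # common alignment point before feature extraction.
    def realign(spike, upsample=4, align_at=None):
        t = np.arange(spike.size)
        fine_t = np.linspace(0, spike.size - 1, spike.size * upsample)
        fine = interp1d(t, spike, kind="cubic")(fine_t)
        peak = np.argmax(np.abs(fine))
        align_at = align_at if align_at is not None else fine.size // 2
        return np.roll(fine, align_at - peak)

    rng = np.random.default_rng(3)
    spike = np.exp(-0.5 * ((np.arange(32) - 13.3) / 2.0) ** 2)  # off-grid peak
    aligned = realign(spike + rng.normal(0, 0.02, 32))
    print("peak index after re-alignment:", np.argmax(aligned),
          "of", aligned.size)
    ```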

  10. The Effects of Observation and Intervention on the Judgment of Causal and Correlational Relationships

    DTIC Science & Technology

    2009-07-28

    further referred to as normative models of causation. A second type of model, which is based on Pavlovian classical conditioning, is associative... conditions of high cognitive load), the likelihood of the accuracy of the perception is compromised. If an inaccurate perception translates to an inaccurate... correlation and causation detection in specific military operations and under conditions of operational stress. Background: Models of correlation

  11. The Development, Pilot, and Field Test of the Core HIV/AIDS Knowledge Assessment for Undergraduate and Graduate Students in Counseling-Related Degree Programs

    ERIC Educational Resources Information Center

    Acklin, Carrie

    2016-01-01

    The purpose of this study was to develop a core HIV/AIDS knowledge assessment (CHAKA) for students enrolled in counseling-related degree programs. Although there are studies that examined counseling HIV/AIDS knowledge, the instruments that were used were limited in ways that may compromise the accuracy of the inferences that were made. This study…

  12. Firmware Development Improves System Efficiency

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1993-01-01

    Most manufacturing processes require physical pointwise positioning of the components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate procession to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains the throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this point wise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.

  13. Development and comparison of two devices for treatment of onychomycosis by photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Silva, Ana Paula da; Chiandrone, Daniel José; Tinta, Jefferson Wanderson Rossi; Kurachi, Cristina; Inada, Natalia Mayumi; Bagnato, Vanderlei Salvador

    2015-06-01

    Onychomycosis is the most common nail disorder. Treating this type of infection is one of the main difficulties in clinical practice, because the nails are nonvascularized structures, which compromises the penetration of systemically delivered drugs, and because nail growth is slow. We present two devices based on light-emitting diode arrays as light sources for the treatment of onychomycosis by photodynamic therapy (PDT). PDT is an emerging technique that uses a photosensitizer (PS) activated by light in the presence of oxygen. The PS absorbs energy from light and transfers it to oxygen, producing reactive oxygen species such as hydroxyl radicals, superoxide, and singlet oxygen, which inactivate fungi and bacteria. Our proposal is the use of a portable and secure light source device in patients with onychomycosis. Additional advantages are the low cost involved, the possibility of topical rather than systemic treatment, and the simplicity of operation. These advantages are important to ensure the implementation of this technology for the treatment of a high-impact health problem.

  14. Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework

    NASA Astrophysics Data System (ADS)

    Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.

    2018-01-01

    Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can the spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of the multisite properties of daily precipitation and highlight the need for a flexible non-Gaussian tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model that merges a vector generalized linear model (VGLM, as the probabilistic regression tool) with the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. a Gaussian copula).
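
    The reason whole-day resampling preserves spatial structure can be shown in a few lines: instead of drawing each site independently, the bootstrap draws one historical day index and reuses that day's full cross-site vector, so the inter-site dependence of the history carries over. This is a generic nonparametric-bootstrap illustration, not the paper's VGLM-coupled procedure.

    ```python
    import numpy as np

    # Whole-day bootstrap: drawing one historical day index per simulated
    # day reuses that day's full cross-site vector, preserving the
    # inter-site correlation of the history. Synthetic history below.
    rng = np.random.default_rng(4)
    n_days, n_sites = 3650, 5
    rho = 0.7                                        # cross-site correlation
    cov = rho + (1 - rho) * np.eye(n_sites)
    history = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_days)

    idx = rng.integers(0, n_days, size=1000)         # bootstrap day indices
    simulated = history[idx]                         # whole-day vectors
    print("target corr ~0.7, simulated:",
          np.round(np.corrcoef(simulated.T)[0, 1], 2))
    ```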

  15. Coaching with Simplicity: Thoreau and Sport

    ERIC Educational Resources Information Center

    Hochstetler, Doug

    2004-01-01

    Simplicity, as espoused by American philosopher Henry David Thoreau, is a method of removing unnecessary obstacles, a tangible means to attain a higher life, one of crystallization and transcendence. A complex profession such as coaching stands to greatly benefit from this concept. The purpose of this paper is to apply simplicity to coaching. A…

  16. Minimising human error in malaria rapid diagnosis: clarity of written instructions and health worker performance.

    PubMed

    Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven

    2007-01-01

    The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use of both the dipstick and the cassette increased in the Philippines by 17%). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training, as well as monitoring of user behaviour, are an essential part of rapid test implementation.

  17. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias however, positive bias may occur when equipment (e.g. mesh-size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an error variance around an average deviation. Following standardized protocols and assigning precise reference conditions, the error variance of their comparative ratio (test-site:reference) can be measured and used to estimate the accuracy of the resultant assessment. PMID:27392036
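
    The weighted-average construction under discussion can be made concrete with a BMWP-style calculation: each indicator family carries a tolerance score, the site score is their sum, and the average score per taxon (ASPT) divides by the number of scoring families. The scores below are illustrative.

    ```python
    # Worked example of a weighted-average biotic index of the BMWP type:
    # sum the tolerance scores of the indicator families present, then
    # divide by the number of scoring families for the ASPT. Scores are
    # illustrative, not the official BMWP table.
    TOLERANCE = {"Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4,
                 "Chironomidae": 2, "Oligochaeta": 1}

    def bmwp_aspt(families_present):
        scores = [TOLERANCE[f] for f in families_present if f in TOLERANCE]
        bmwp = sum(scores)
        return bmwp, bmwp / len(scores) if scores else 0.0

    bmwp, aspt = bmwp_aspt(["Heptageniidae", "Baetidae", "Chironomidae"])
    print(f"BMWP = {bmwp}, ASPT = {aspt:.2f}")   # -> BMWP = 16, ASPT = 5.33
    ```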

  18. Recognition and Sensing of Creatinine.

    PubMed

    Guinovart, Tomàs; Hernández-Alonso, Daniel; Adriaenssens, Louis; Blondeau, Pascal; Martínez-Belmonte, Marta; Rius, F Xavier; Andrade, Francisco J; Ballester, Pablo

    2016-02-12

    Current methods for creatinine quantification suffer from significant drawbacks when aiming to combine accuracy, simplicity, and affordability. Here, an unprecedented synthetic receptor, an aryl-substituted calix[4]pyrrole with a monophosphonate bridge, is reported that displays remarkable affinity for creatinine and the creatininium cation. The receptor works by including the guest in its deep and polar aromatic cavity and establishing directional interactions in three dimensions. When incorporated into a suitable polymeric membrane, this molecule acts as an ionophore. A highly sensitive and selective potentiometric sensor suitable for the determination of creatinine levels in biological fluids, such as urine or plasma, in an accurate, fast, simple, and cost-effective way has thus been developed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A Method for Computing Leading-Edge Loads

    NASA Technical Reports Server (NTRS)

    Rhode, Richard V; Pearson, Henry A

    1933-01-01

    In this report a formula is developed that enables the determination of the proper design load for the portion of the wing forward of the front spar. The formula is inherently rational in concept, as it takes into account the most important variables that affect the leading-edge load, although theoretical rigor has been sacrificed for simplicity and ease of application. Some empirical corrections, based on pressure distribution measurements on the PW-9 and M-3 airplanes have been introduced to provide properly for biplanes. Results from the formula check experimental values in a variety of cases with good accuracy in the critical loading conditions. The use of the method for design purposes is therefore felt to be justified and is recommended.

  20. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.; Cobb, Ernest D.; Kilpatrick, F.A.

    1986-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.
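
    The dilution-type discharge measurement mentioned above rests on a simple mass balance: for a steady injection of tracer at rate q and concentration C1 into a stream with background concentration Cb, the fully mixed downstream plateau C2 yields the discharge. The numbers below are illustrative.

    ```python
    # Constant-rate-injection dilution gauging: tracer mass balance gives
    # the stream discharge from the injection rate and the measured
    # concentrations. Numbers are illustrative.
    def discharge(q_inj, C1, C2, Cb=0.0):
        """Q = q * (C1 - Cb) / (C2 - Cb); any consistent units."""
        return q_inj * (C1 - Cb) / (C2 - Cb)

    Q = discharge(q_inj=2.0e-5, C1=1.0e5, C2=1.2, Cb=0.02)   # m3/s, ug/L
    print(f"stream discharge: {Q:.2f} m^3/s")
    ```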

  1. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.

    1968-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.

  2. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James E.; Cobb, Ernest D.; Kilpatrick, Frederick A.

    1984-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The outstanding characteristics of dye tracing are: (1) the low detection and measurement limits, and (2) the simplicity and accuracy of measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a general guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section is included on aerial photography because of its possible use to supplement ground-level fluorometry.

  3. New methodology for the thermal characterization of thermoelectric liquids

    NASA Astrophysics Data System (ADS)

    Touati, Karim; Depriester, Michael; Kuriakose, Maju; Hadj Sahraoui, Abdelhak

    2015-09-01

    A new and accurate method for the thermal characterization of thermoelectric liquids is proposed. The experiment is based on a self-generated voltage due to the Seebeck effect. This voltage is provided by the sample when one of its two faces is thermally excited using a modulated laser. The sample used is a tetradodecylammonium nitrate salt/1-octanol mixture with a high Seebeck coefficient. The thermal properties of the sample (thermal diffusivity, effusivity, and conductivity) are found and compared to those obtained by other photothermal techniques. In addition, the variation of the electrolyte's thermal parameters with tetradodecylammonium nitrate concentration was also studied. This new method is promising due to its accuracy and its simplicity.

  4. The simplicity principle in perception and cognition

    PubMed Central

    Feldman, Jacob

    2016-01-01

    The simplicity principle, traditionally referred to as Occam’s razor, is the idea that simpler explanations of observations should be preferred to more complex ones. In recent decades the principle has been clarified via the incorporation of modern notions of computation and probability, allowing a more precise understanding of how exactly complexity minimization facilitates inference. The simplicity principle has found many applications in modern cognitive science, in contexts as diverse as perception, categorization, reasoning, and neuroscience. In all these areas, the common idea is that the mind seeks the simplest available interpretation of observations, or, more precisely, that it balances a bias towards simplicity with a somewhat opposed constraint to choose models consistent with perceptual or cognitive observations. This brief tutorial surveys some of the uses of the simplicity principle across cognitive science, emphasizing how complexity minimization in a number of forms has been incorporated into probabilistic models of inference. PMID:27470193
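
    As a hedged, minimal illustration of this balance between fit and simplicity (not an algorithm from the tutorial itself), the sketch below selects a polynomial degree by the Bayesian information criterion, one standard formalization of a complexity penalty:

    ```python
    # A minimal sketch of trading fit against complexity: choose a polynomial
    # degree by the Bayesian information criterion, one common formalization
    # of a simplicity bias in probabilistic inference. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # toy data

    def bic(degree):
        coeffs = np.polyfit(x, y, degree)
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        k = degree + 1                                # number of fitted parameters
        n = x.size
        return n * np.log(rss / n) + k * np.log(n)    # fit term + complexity penalty

    best = min(range(1, 10), key=bic)
    print("degree chosen by simplicity-penalized fit:", best)
    ```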

  5. Classification of ABO3 perovskite solids: a machine learning study

    DOE PAGES

    Pilania, G.; Balachandran, P. V.; Gubernatis, J. E.; ...

    2015-07-23

    Here we explored the use of machine learning methods for classifying whether a particular ABO3 chemistry forms a perovskite or non-perovskite structured solid. Starting with three sets of feature pairs (the tolerance and octahedral factors, the A and B ionic radii relative to the radius of O, and the bond valence distances of the A and B ions from the O atoms), we used machine learning to create a hyper-dimensional partial dependency structure plot using all three feature pairs or any two of them. Doing so increased the accuracy of our predictions by 2-3 percentage points over using any one pair. We also added the Mendeleev numbers of the A and B atoms to this set of feature pairs. Doing this, and using the capabilities of our machine learning algorithm, the gradient tree boosting classifier, enabled us to generate a new type of structure plot that has the simplicity of one based on just the Mendeleev numbers, but with the added advantages of higher accuracy and a measure of the likelihood of the predicted structure.
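
    A minimal sketch of the kind of classifier described, assuming scikit-learn's GradientBoostingClassifier as a stand-in for the authors' gradient tree boosting setup; the (tolerance factor, octahedral factor) values and labels below are synthetic, not the paper's data:

    ```python
    # Sketch of gradient tree boosting on one (tolerance factor, octahedral
    # factor) feature pair. Features, labels, and the toy decision rule are
    # synthetic stand-ins for the study's dataset.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    t = rng.uniform(0.7, 1.1, 200)           # tolerance factor
    mu = rng.uniform(0.3, 0.7, 200)          # octahedral factor
    # toy rule: perovskites tend to sit in 0.8 < t < 1.0 with mu > 0.41
    y = ((t > 0.8) & (t < 1.0) & (mu > 0.41)).astype(int)

    clf = GradientBoostingClassifier().fit(np.column_stack([t, mu]), y)
    print(clf.predict([[0.95, 0.5]]))        # likely classified as perovskite
    ```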

  6. Application of neural networks with novel independent component analysis methodologies to a Prussian blue modified glassy carbon electrode array.

    PubMed

    Wang, Liang; Yang, Die; Fang, Cheng; Chen, Zuliang; Lesniewski, Peter J; Mallavarapu, Megharaj; Naidu, Ravendra

    2015-01-01

    Sodium potassium absorption ratio (SPAR) is an important measure of agricultural water quality, wherein four exchangeable cations (K(+), Na(+), Ca(2+) and Mg(2+)) should be simultaneously determined. An ISE array is suitable for this application because of its simplicity, rapid response characteristics, and lower cost. However, cross-interferences caused by the poor selectivity of ISEs need to be overcome using multivariate chemometric methods. In this paper, a solid-contact ISE array, based on a Prussian blue modified glassy carbon electrode (PB-GCE), was applied with a novel chemometric strategy. One of the most popular independent component analysis (ICA) methods, the fast fixed-point algorithm for ICA (fastICA), was implemented with a genetic algorithm (geneticICA) to avoid the local maxima problem commonly observed with fastICA. This geneticICA can be used as a data preprocessing method to improve the prediction accuracy of a back-propagation neural network (BPNN). The ISE array system was validated using 20 real irrigation water samples from South Australia, and acceptable prediction accuracies were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
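
    The paper's geneticICA is not publicly packaged, so as a rough, hypothetical stand-in the sketch below chains scikit-learn's standard FastICA preprocessing with a back-propagation network (MLPRegressor) on synthetic cross-sensitive ISE responses:

    ```python
    # Stand-in pipeline (not the paper's geneticICA): FastICA unmixes the
    # cross-sensitive electrode responses, then a backpropagation network
    # maps the recovered sources to ion concentrations. Data are synthetic.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    true_conc = rng.uniform(0.1, 10.0, (300, 4))       # K+, Na+, Ca2+, Mg2+
    mixing = rng.uniform(0.1, 1.0, (4, 8))             # cross-interference matrix
    responses = np.log(true_conc) @ mixing + rng.normal(scale=0.05, size=(300, 8))

    sources = FastICA(n_components=4, random_state=0).fit_transform(responses)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(sources, true_conc)
    print(model.score(sources, true_conc))             # in-sample fit only
    ```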

  7. Simulation-based investigation of the generality of Lyzenga's multispectral bathymetry formula in Case-1 coral reef water

    NASA Astrophysics Data System (ADS)

    Manessa, Masita Dwi Mandini; Kanno, Ariyo; Sagawa, Tatsuyuki; Sekine, Masahiko; Nurdin, Nurjannah

    2018-01-01

    Lyzenga's multispectral bathymetry formula has attracted considerable interest due to its simplicity. However, there has been little discussion of the effect that variation in optical conditions and bottom types, which commonly appears in coral reef environments, has on this formula's results. The present paper evaluates Lyzenga's multispectral bathymetry formula for a variety of optical conditions and bottom types. A noiseless dataset of above-water remote sensing reflectance from WorldView-2 images over Case-1 shallow coral reef water is simulated using a radiative transfer model. The simulation-based assessment shows that Lyzenga's formula performs robustly, with adequate generality and good accuracy, under a range of conditions. As expected, the influence of bottom type on depth estimation accuracy is far greater than the influence of other optical parameters, i.e., chlorophyll-a concentration and solar zenith angle. Further, based on the simulation dataset, Lyzenga's formula estimates depth when the bottom type is unknown almost as accurately as when the bottom type is known. This study provides a better understanding of Lyzenga's multispectral bathymetry formula under various optical conditions and bottom types.
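
    For reference, Lyzenga's formula as commonly stated expresses depth as z = a0 + Σj aj ln(R(λj) − R∞(λj)), with the coefficients fitted by least squares against known depths; the sketch below applies it to invented two-band reflectances:

    ```python
    # Hedged sketch of Lyzenga's linear depth model with least-squares
    # calibration. Reflectances, deep-water values, and depths are invented.
    import numpy as np

    R = np.array([[0.031, 0.025], [0.024, 0.019], [0.018, 0.014]])  # two bands
    R_deep = np.array([0.010, 0.008])       # deep-water reflectance per band
    z_known = np.array([2.0, 4.0, 7.0])     # calibration depths (m)

    X = np.log(R - R_deep)                  # Lyzenga's transformed variables
    A = np.column_stack([np.ones(len(z_known)), X])
    coef, *_ = np.linalg.lstsq(A, z_known, rcond=None)
    print(A @ coef)                         # depths reproduced at the fit points
    ```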

  8. Comparison of Factor Simplicity Indices for Dichotomous Data: DETECT R, Bentler's Simplicity Index, and the Loading Simplicity Index

    ERIC Educational Resources Information Center

    Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick

    2008-01-01

    A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, test items tap into only one latent trait. This assumption can be assessed several ways, using nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…

  9. Composite Bloom Filters for Secure Record Linkage.

    PubMed

    Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley

    2014-12-01

    The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance competing goals of accuracy, efficiency and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically-informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy.
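
    As background, a minimal sketch of the basic field-level Bloom filter encoding this work builds on (not the composite, statistically informed variant it introduces): character bigrams of a field value are hashed into a fixed-length bit array via double hashing. The filter length m and hash count k below are arbitrary.

    ```python
    # Baseline field-level Bloom filter encoding for private record linkage:
    # tokenize a field value into character bigrams and hash each bigram into
    # k positions of an m-bit array (double hashing: h1 + i*h2).
    import hashlib

    def bloom_encode(value, m=64, k=4):
        bits = [0] * m
        grams = [value[i:i + 2] for i in range(len(value) - 1)]
        for g in grams:
            h1 = int(hashlib.sha256(g.encode()).hexdigest(), 16)
            h2 = int(hashlib.md5(g.encode()).hexdigest(), 16)
            for i in range(k):
                bits[(h1 + i * h2) % m] = 1
        return bits

    print(sum(bloom_encode("SMITH")))   # number of set bits
    ```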

  10. Composite Bloom Filters for Secure Record Linkage

    PubMed Central

    Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley

    2014-01-01

    The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance competing goals of accuracy, efficiency and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically-informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy. PMID:25530689

  11. Autobalanced Ramsey Spectroscopy

    NASA Astrophysics Data System (ADS)

    Sanner, Christian; Huntemann, Nils; Lange, Richard; Tamm, Christian; Peik, Ekkehard

    2018-01-01

    We devise a perturbation-immune version of Ramsey's method of separated oscillatory fields. Spectroscopy of an atomic clock transition without compromising the clock's accuracy is accomplished by actively balancing the spectroscopic responses from phase-congruent Ramsey probe cycles of unequal durations. Our simple and universal approach eliminates a wide variety of interrogation-induced line shifts often encountered in high precision spectroscopy, among them, in particular, light shifts, phase chirps, and transient Zeeman shifts. We experimentally demonstrate autobalanced Ramsey spectroscopy on the light-shift-prone 171Yb+ electric octupole optical clock transition and show that interrogation defects are not turned into clock errors. This opens up frequency accuracy perspectives below the 10⁻¹⁸ level for the Yb+ system and for other types of optical clocks.

  12. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    DOE PAGES

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; ...

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.

  13. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    PubMed Central

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs. PMID:22505949

  14. Protecting privacy of shared epidemiologic data without compromising analysis potential.

    PubMed

    Cologne, John; Grant, Eric J; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.
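
    A hedged sketch of the recommended masking step: rounding each value to a fixed number of significant digits removes relative accuracy (hindering re-identification) while preserving analysis potential. The data and digit count below are illustrative only.

    ```python
    # Masking by rounding to a fixed number of significant digits, the
    # disclosure-control treatment the study recommends. Values are invented.
    from math import floor, log10

    def round_sig(x, digits=2):
        """Round x to `digits` significant digits."""
        if x == 0:
            return 0.0
        return round(x, digits - 1 - floor(log10(abs(x))))

    doses = [0.03724, 1.4821, 12.963]        # e.g., individual dose estimates
    print([round_sig(d, 2) for d in doses])  # -> [0.037, 1.5, 13.0]
    ```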

  15. An assessment of the potential of PFEM-2 for solving long real-time industrial applications

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; Ramajo, Damián E.; Márquez Damián, Santiago; Nigro, Norberto M.; Idelsohn, Sergio R.

    2017-07-01

    The latest generation of the particle finite element method (PFEM-2) is a numerical method based on the Lagrangian formulation of the equations, which presents advantages in robustness and efficiency over classical Eulerian methodologies when certain kinds of flows are simulated, especially those where convection plays an important role. Such situations are often encountered in real engineering problems, where very complex geometries and operating conditions demand very large and long computations. The advantages of parallelism in computational fluid dynamics, which makes computations with very fine spatial discretizations affordable, are well known; time, however, cannot be parallelized in the same way, despite ongoing efforts on space-time formulations. In this sense, PFEM-2 adds a valuable feature: its strong stability, with little loss of accuracy, offers a practical way of meeting real-life computation needs. Having already demonstrated in previous publications its ability to achieve academic solutions with a good compromise between accuracy and efficiency, the method is revisited here and employed to solve several nonacademic problems of technological interest. Simulations concerning oil-water separation, waste-water treatment, metallurgical foundries, and safety assessment are presented. These cases are selected for their particular requirements of long simulation times and/or intensive interface treatment. Large time-steps may thus be employed with PFEM-2 without the loss of accuracy and robustness that occurs with Eulerian alternatives, showing the potential of the methodology for solving not only academic tests but also real engineering problems.

  16. Evaluation and comparison of diffusion MR methods for measuring apparent transcytolemmal water exchange rate constant

    NASA Astrophysics Data System (ADS)

    Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong

    2017-02-01

    Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate the two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s⁻¹) and different SNRs, and simulated kin values were directly compared with the ground truth. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between the kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimates of kin, especially when it is smaller than 10 s⁻¹, which is in the typical physiological range of many biological tissues. Although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates at practical SNRs and, more importantly, the fitted apparent exchange rate AXR shows an approximately linear dependence on the ground truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimates of kin but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.

  17. Polyethylene glycol versus dual sugar assay for gastrointestinal permeability analysis: is it time to choose?

    PubMed

    van Wijck, Kim; Bessems, Babs Afm; van Eijk, Hans Mh; Buurman, Wim A; Dejong, Cornelis Hc; Lenaerts, Kaatje

    2012-01-01

    Increased intestinal permeability is an important measure of disease activity and prognosis. Currently, many permeability tests are available and no consensus has been reached as to which test is most suitable. The aim of this study was to compare the urinary probe excretion and diagnostic accuracy of a polyethylene glycol (PEG) assay and a dual sugar assay in a double-blinded crossover study. Gastrointestinal permeability was measured in nine volunteers using PEG 400, PEG 1500, and PEG 3350 or lactulose-rhamnose. On 4 separate days, permeability was analyzed after oral intake of placebo or indomethacin, a drug known to increase intestinal permeability. Plasma intestinal fatty acid-binding protein and calprotectin levels were determined to verify compromised intestinal integrity after indomethacin consumption. Urinary samples were collected at baseline, hourly up to 5 hours after probe intake, and between 5 and 24 hours. Urinary excretion of PEG and sugars was determined using high-pressure liquid chromatography-evaporative light scattering detection and liquid chromatography-mass spectrometry, respectively. Intake of indomethacin increased plasma intestinal fatty acid-binding protein and calprotectin levels, reflecting loss of intestinal integrity and inflammation. In this state of indomethacin-induced gastrointestinal compromise, urinary excretion of the three PEG probes and lactulose increased compared with placebo. Urinary PEG 400 excretion, the PEG 3350/PEG 400 ratio, and the lactulose/rhamnose ratio could accurately detect indomethacin-induced increases in gastrointestinal permeability, especially within 2 hours of probe intake. Hourly urinary excretion and diagnostic accuracy of PEG and sugar probes show high concordance for detection of indomethacin-induced increases in gastrointestinal permeability. This comparative study improves our knowledge of permeability analysis in man by providing a clear overview of both tests and demonstrates equivalent performance in the current setting.

  18. Remote real-time monitoring of free flaps via smartphone photography and 3G wireless Internet: a prospective study evidencing diagnostic accuracy.

    PubMed

    Engel, Holger; Huang, Jung Ju; Tsao, Chung Kan; Lin, Chia-Yu; Chou, Pan-Yu; Brey, Eric M; Henry, Steven L; Cheng, Ming Huei

    2011-11-01

    This prospective study was designed to compare the accuracy rate between remote smartphone photographic assessments and in-person examinations for free flap monitoring. One hundred and three consecutive free flaps were monitored with in-person examinations and assessed remotely by three surgeons (Team A) via photographs transmitted over smartphone. Four other surgeons used the traditional in-person examinations as Team B. The response time to re-exploration was defined as the interval between when a flap was evaluated as compromised by the nurse/house officer and when the decision was made for re-exploration. The accuracy rate was 98.7% and 94.2% for in-person and smartphone photographic assessments, respectively. The response time of 8 ± 3 min in Team A was statistically shorter than the 180 ± 104 min in Team B (P = 0.01 by the Mann-Whitney test). The remote smartphone photography assessment has a comparable accuracy rate and shorter response time compared with in-person examination for free flap monitoring. Copyright © 2011 Wiley Periodicals, Inc.

  19. Reinforced Concrete Modeling

    DTIC Science & Technology

    1982-07-01

    micro-cracks within the material. This microcracking causes permanent deformation and a loss in stiffness similar to the strain hardening seen in metals...approached. Dilatation is caused by the tendency of shear stresses to open cracks in a microcracked, brittle material. ...situation would be for a user to compromise some accuracy based on what features of a material are of the most importance for the analysis involved

  20. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, such as registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
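
    To illustrate the decomposition idea only (DMCGI itself replaces the interpolator used here with a registration-based 1-D control grid interpolator), the sketch below performs a 2-D resize as two independent 1-D linear interpolation passes:

    ```python
    # Separable 2-D resize: one 1-D interpolation pass over rows, then one
    # over columns. This shows the decomposition strategy, not DMCGI's
    # registration-based 1-D interpolator.
    import numpy as np

    def resize_separable(img, new_h, new_w):
        h, w = img.shape
        xs = np.linspace(0, w - 1, new_w)
        rows = np.stack([np.interp(xs, np.arange(w), r) for r in img])       # pass 1
        ys = np.linspace(0, h - 1, new_h)
        cols = np.stack([np.interp(ys, np.arange(h), c) for c in rows.T]).T  # pass 2
        return cols

    img = np.arange(16.0).reshape(4, 4)
    print(resize_separable(img, 8, 8).shape)   # (8, 8)
    ```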

  1. One Size Does Not Fit All: A System Development Perspective

    DTIC Science & Technology

    2013-09-09

    Postgraduate School in Monterey, CA. LCDR LaSalle graduated with an Associate of Science degree in nutrition and culinary arts from Johnson & Wales...designs throughout the project. 10. Simplicity, the art of maximizing the amount of work not done, is essential. As Cockburn (2002) stated, "Simplicity...The fifth discipline: The art and practice of the learning organization. New York, NY: Doubleday/Currency. Sengupta, K., Van Oorschot, K. E., & Van

  2. Lagrangian approach to the Barrett-Crane spin foam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonzom, Valentin; Laboratoire de Physique, ENS Lyon, CNRS UMR 5672, 46 Allee d'Italie, 69007 Lyon; Livine, Etera R.

    2009-03-15

    We provide the Barrett-Crane spin foam model for quantum gravity with a discrete action principle, consisting of the usual BF term with discretized simplicity constraints, which in the continuum turn topological BF theory into gravity. The setting is the same as usually considered in the literature: space-time is cut into 4-simplices, the connection describes how to glue these 4-simplices together, and the action is a sum of terms depending on the holonomies around each triangle. We impose the discretized simplicity constraints on disjoint tetrahedra and we show how the Lagrange multipliers distort the parallel transport and the correlations between neighboring simplices. We then construct the discretized BF action using a noncommutative * product between SU(2) plane waves. We show how this naturally leads to the Barrett-Crane model. This clears up the geometrical meaning of the model. We discuss the natural generalization of this action principle and the spin foam models it leads to. We show how the recently introduced spin foam fusion coefficients emerge with a nontrivial measure. In particular, we recover the Engle-Pereira-Rovelli spin foam model by weakening the discretized simplicity constraints. Finally, we identify the two sectors of Plebanski's theory and we give the analog of the Barrett-Crane model in the nongeometric sector.

  3. In enterovirus 71 encephalitis with cardio-respiratory compromise, elevated interleukin 1β, interleukin 1 receptor antagonist, and granulocyte colony-stimulating factor levels are markers of poor prognosis.

    PubMed

    Griffiths, Michael J; Ooi, Mong H; Wong, See C; Mohan, Anand; Podin, Yuwana; Perera, David; Chieng, Chae H; Tio, Phaik H; Cardosa, Mary J; Solomon, Tom

    2012-09-15

    Enterovirus 71 (EV71) causes large outbreaks of hand, foot, and mouth disease (HFMD), with severe neurological complications and cardio-respiratory compromise, but the pathogenesis is poorly understood. We measured levels of 30 chemokines and cytokines in serum and cerebrospinal fluid (CSF) samples from Malaysian children hospitalized with EV71 infection (n = 88), comprising uncomplicated HFMD (n = 47), meningitis (n = 8), acute flaccid paralysis (n = 1), encephalitis (n = 21), and encephalitis with cardiorespiratory compromise (n = 11). Four of the latter patients died. Both pro-inflammatory and anti-inflammatory mediator levels were elevated, with different patterns of mediator abundance in the CSF and vascular compartments. Serum concentrations of interleukin 1β (IL-1β), interleukin 1 receptor antagonist (IL-1Ra), and granulocyte colony-stimulating factor (G-CSF) were raised significantly in patients who developed cardio-respiratory compromise (P = .013, P = .004, and P < .001, respectively). Serum IL-1Ra and G-CSF levels were also significantly elevated in patients who died, with a serum G-CSF to interleukin 5 ratio of >100 at admission being the most accurate prognostic marker for death (P < .001; accuracy, 85.5%; sensitivity, 100%; specificity, 84.7%). Given that IL-1β has a negative inotropic action on the heart, and that both its natural antagonist, IL-1Ra, and G-CSF are being assessed as treatments for acute cardiac impairment, the findings suggest we have identified functional markers of EV71-related cardiac dysfunction and potential treatment options.

  4. Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.

    1997-01-01

    While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, a further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multi-dimensional models of supernovae.

  5. The subtle business of model reduction for stochastic chemical kinetics

    NASA Astrophysics Data System (ADS)

    Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.

    2009-02-01

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
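
    For concreteness, a minimal exact stochastic simulation (Gillespie's direct method) of the paper's model reaction set S1⇌S2→S3; the rate constants and initial counts below are arbitrary illustrative values:

    ```python
    # Exact SSA (Gillespie direct method) for S1 <=> S2 -> S3.
    import math
    import random

    def ssa(x1, x2, x3, c1, c2, c3, t_end):
        t = 0.0
        while t < t_end:
            a = [c1 * x1, c2 * x2, c3 * x2]          # propensities of the 3 channels
            a0 = sum(a)
            if a0 == 0:
                break
            t += -math.log(random.random()) / a0     # time to next reaction
            r = random.random() * a0
            if r < a[0]:          x1 -= 1; x2 += 1   # S1 -> S2
            elif r < a[0] + a[1]: x1 += 1; x2 -= 1   # S2 -> S1
            else:                 x2 -= 1; x3 += 1   # S2 -> S3
        return x1, x2, x3

    random.seed(0)
    print(ssa(x1=100, x2=0, x3=0, c1=1.0, c2=2.0, c3=0.5, t_end=5.0))
    ```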

  6. Finite difference methods for reducing numerical diffusion in TEACH-type calculations. [Teaching Elliptic Axisymmetric Characteristics Heuristically

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.

    1985-01-01

    A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Elliptic Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to these criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
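
    As context for what such schemes are designed to fix, the sketch below shows the baseline first-order upwind discretization of 1-D linear advection, whose strong numerical diffusion smears a sharp profile; the grid size and CFL number are arbitrary:

    ```python
    # First-order upwind discretization of 1-D linear advection. Its numerical
    # diffusion is the defect that higher-order schemes like those evaluated
    # here aim to reduce.
    import numpy as np

    n, cfl, steps = 100, 0.5, 80
    u = np.where(np.arange(n) < 20, 1.0, 0.0)   # sharp step profile
    for _ in range(steps):
        u[1:] -= cfl * (u[1:] - u[:-1])         # upwind difference (flow to the right)
    print(np.sum((u > 0.05) & (u < 0.95)))      # width of the smeared front, in cells
    ```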

  7. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  8. Open data mining for Taiwan's dengue epidemic.

    PubMed

    Wu, ChienHsing; Kao, Shu-Chen; Shih, Chia-Hung; Kan, Meng-Hsuan

    2018-07-01

    Using a quantitative approach, this study examines the applicability of data mining techniques to discover knowledge from open data related to Taiwan's dengue epidemic. We compare results when Google trend data are included or excluded. Data sources are government open data, climate data, and Google trend data. Findings are obtained from an analysis of 70,914 cases. Location and time (month) in the open data show the highest classification power, followed by climate variables (temperature and humidity), whereas gender and age show the lowest values. Both prediction accuracy and simplicity decrease when Google trends are considered (0.94 and 0.37, respectively, compared to 0.96 and 0.46). The article demonstrates the value of open data mining in the context of public health care. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.

  10. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

    This work represents a comparative study of different approaches of manipulating ratio spectra, applied on a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: ratio difference spectrophotometric method (RDSM), amplitude center method (ACM), first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  11. Scale model testing of drogues for free drifting buoys

    NASA Technical Reports Server (NTRS)

    Vachon, W. A.

    1973-01-01

    Instrumented model drogue tests were conducted in a ship model towing tank. The purpose of the tests was to observe and measure deployment and drag characteristics of such shapes as parachutes, crossed vanes, and window shades which may be employed in conjunction with free drifting buoys. Both Froude and Reynolds scaling laws were applied while scaling to full-scale relative velocities from 0 to 0.2 knots. A weighted window shade drogue is recommended because of its performance, high drag coefficient, simplicity, and low cost. Detailed theoretical performance curves are presented for parachutes, crossed vanes, and window shade drogues. Theoretical estimates of depth locking accuracy and buoy-induced dynamic loads pertinent to window shade drogues are presented as a design aid. An example of a window shade drogue design is presented.

  12. A unified electrostatic and cavitation model for first-principles molecular dynamics in solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherlis, D A; Fattebert, J; Gygi, F

    2005-11-14

    The electrostatic continuum solvent model developed by Fattebert and Gygi is combined with a first-principles formulation of the cavitation energy based on a natural quantum-mechanical definition for the surface of a solute. Despite its simplicity, the cavitation contribution calculated by this approach is found to be in remarkable agreement with that obtained by more complex algorithms relying on a large set of parameters. The model allows for very efficient Car-Parrinello simulations of finite or extended systems in solution, and demonstrates a level of accuracy as good as that of established quantum-chemistry continuum solvent methods. The approach is applied to the study of tetracyanoethylene dimers in dichloromethane, providing valuable structural and dynamical insights into the dimerization phenomenon.

  13. Object extraction method for image synthesis

    NASA Astrophysics Data System (ADS)

    Inoue, Seiki

    1991-11-01

    The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves some intricate and tedious manual processes. A new method proposed in this paper can reduce the needed level of operator skill and simplify object extraction. The object is automatically extracted from just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned-out boundary line, its ease of application to moving images, and the lack of any need for adjustment.

  14. The subtle business of model reduction for stochastic chemical kinetics.

    PubMed

    Gillespie, Dan T; Cao, Yang; Sanft, Kevin R; Petzold, Linda R

    2009-02-14

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.

  15. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  16. The Osher scheme for non-equilibrium reacting flows

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1992-01-01

    An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zel'dovich-von Neumann-Döring) detonation problem for which spurious numerical solutions that propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.

  17. MedEx/J: A One-Scan Simple and Fast NLP Tool for Japanese Clinical Texts.

    PubMed

    Aramaki, Eiji; Yano, Ken; Wakamiya, Shoko

    2017-01-01

    Because of the recent replacement of physical documents with electronic medical records (EMR), the importance of information processing in the medical field has increased. In light of this trend, we have been developing MedEx/J, which retrieves important Japanese-language information from medical reports. MedEx/J executes two tasks simultaneously: (1) term extraction, and (2) positive and negative event classification. We designate this approach as a one-scan approach, providing system simplicity and reasonable accuracy. MedEx/J performance on the two tasks is described herein: (1) term extraction (F(β=1) = 0.87) and (2) positive-negative classification (F(β=1) = 0.63). This paper also presents discussion and explains remaining issues in the medical natural language processing field.

  18. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
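
    A minimal sketch of the core mean-shift iteration such trackers rely on: repeatedly moving an estimate to the kernel-weighted centroid of nearby samples until it converges on a density mode. Real trackers weight pixels by color-histogram likelihood; here the samples are plain 2-D points.

    ```python
    # Mean-shift mode seeking with a Gaussian kernel over 2-D samples.
    import numpy as np

    def mean_shift(points, start, bandwidth=1.0, iters=50):
        x = np.asarray(start, dtype=float)
        for _ in range(iters):
            d2 = np.sum((points - x) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))    # Gaussian kernel weights
            x_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < 1e-6:
                break
            x = x_new
        return x

    pts = np.random.default_rng(3).normal(loc=[5.0, 5.0], size=(200, 2))
    print(mean_shift(pts, start=[3.0, 3.0]))          # converges near (5, 5)
    ```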

  19. A Study Protocol for Testing the Effectiveness of User-Generated Content in Reducing Excessive Consumption

    PubMed Central

    Herziger, Atar; Benzerga, Amel; Berkessel, Jana; Dinartika, Niken L.; Franklin, Matija; Steinnes, Kamilla K.; Sundström, Felicia

    2017-01-01

    Excessive consumption is on the rise, as is apparent in growing financial debt and global greenhouse gas emissions. Voluntary simplicity, a lifestyle choice of reduced consumption and sustainable consumer behavior, provides a potential solution for excessive consumers. However, voluntary simplicity is unpopular, difficult to adopt, and under researched. The outlined research project will test a method of promoting voluntary simplicity via user-generated content, thus mimicking an existing social media trend (Minimalism) in an empirical research design. The project will test (a) whether the Minimalism trend could benefit consumers interested in reducing their consumption, and (b) whether self-transcendence (i.e., biospheric) and self-enhancement (i.e., egoistic and hedonic) values and goals have a similar impact in promoting voluntary simplicity. A one-week intervention program will test the efficacy of watching user-generated voluntary simplicity videos in reducing non-essential consumption. Each of the two intervention conditions will present participants with similar tutorial videos on consumption reduction (e.g., decluttering, donating), while priming the relevant values and goals (self-transcendence or self-enhancement). These interventions will be compared to a control condition, involving no user-generated content. Participants will undergo baseline and post-intervention evaluations of: voluntary simplicity attitudes and behaviors, buying and shopping behaviors, values and goals in reducing consumption, and life satisfaction. Experience sampling will monitor affective state during the intervention. We provide a detailed stepwise procedure, materials, and equipment necessary for executing this intervention. The outlined research design is expected to contribute to the limited literature on voluntary simplicity, online behavioral change interventions, and the use of social marketing principles in consumer interventions. PMID:28649220

  20. A Study Protocol for Testing the Effectiveness of User-Generated Content in Reducing Excessive Consumption.

    PubMed

    Herziger, Atar; Benzerga, Amel; Berkessel, Jana; Dinartika, Niken L; Franklin, Matija; Steinnes, Kamilla K; Sundström, Felicia

    2017-01-01

    Excessive consumption is on the rise, as is apparent in growing financial debt and global greenhouse gas emissions. Voluntary simplicity, a lifestyle choice of reduced consumption and sustainable consumer behavior, provides a potential solution for excessive consumers. However, voluntary simplicity is unpopular, difficult to adopt, and under researched. The outlined research project will test a method of promoting voluntary simplicity via user-generated content, thus mimicking an existing social media trend (Minimalism) in an empirical research design. The project will test (a) whether the Minimalism trend could benefit consumers interested in reducing their consumption, and (b) whether self-transcendence (i.e., biospheric) and self-enhancement (i.e., egoistic and hedonic) values and goals have a similar impact in promoting voluntary simplicity. A one-week intervention program will test the efficacy of watching user-generated voluntary simplicity videos in reducing non-essential consumption. Each of the two intervention conditions will present participants with similar tutorial videos on consumption reduction (e.g., decluttering, donating), while priming the relevant values and goals (self-transcendence or self-enhancement). These interventions will be compared to a control condition, involving no user-generated content. Participants will undergo baseline and post-intervention evaluations of: voluntary simplicity attitudes and behaviors, buying and shopping behaviors, values and goals in reducing consumption, and life satisfaction. Experience sampling will monitor affective state during the intervention. We provide a detailed stepwise procedure, materials, and equipment necessary for executing this intervention. The outlined research design is expected to contribute to the limited literature on voluntary simplicity, online behavioral change interventions, and the use of social marketing principles in consumer interventions.

  1. Effectiveness of a Rapid Lumbar Spine MRI Protocol Using 3D T2-Weighted SPACE Imaging Versus a Standard Protocol for Evaluation of Degenerative Changes of the Lumbar Spine.

    PubMed

    Sayah, Anousheh; Jay, Ann K; Toaff, Jacob S; Makariou, Erini V; Berkowitz, Frank

    2016-09-01

    Reducing lumbar spine MRI scanning time while retaining diagnostic accuracy can benefit patients and reduce health care costs. This study compares the effectiveness of a rapid lumbar MRI protocol using 3D T2-weighted sampling perfection with application-optimized contrast with different flip-angle evolutions (SPACE) sequences with a standard MRI protocol for evaluation of lumbar spondylosis. Two hundred fifty consecutive unenhanced lumbar MRI examinations performed at 1.5 T were retrospectively reviewed. Full, rapid, and complete versions of each examination were interpreted for spondylotic changes at each lumbar level, including herniations and neural compromise. The full examination consisted of sagittal T1-weighted, T2-weighted turbo spin-echo (TSE), and STIR sequences; and axial T1- and T2-weighted TSE sequences (time, 18 minutes 40 seconds). The rapid examination consisted of sagittal T1- and T2-weighted SPACE sequences, with axial SPACE reformations (time, 8 minutes 46 seconds). The complete examination consisted of the full examination plus the T2-weighted SPACE sequence. Sensitivities and specificities of the full and rapid examinations were calculated using the complete study as the reference standard. The rapid and full studies had sensitivities of 76.0% and 69.3%, with specificities of 97.2% and 97.9%, respectively, for all degenerative processes. Rapid and full sensitivities were 68.7% and 66.3% for disk herniation, 85.2% and 81.5% for canal compromise, 82.9% and 69.1% for lateral recess compromise, and 76.9% and 69.7% for foraminal compromise, respectively. Isotropic SPACE T2-weighted imaging provides high-quality imaging of lumbar spondylosis, with multiplanar reformatting capability. Our SPACE-based rapid protocol had sensitivities and specificities for herniations and neural compromise comparable to those of the protocol without SPACE. This protocol fits within a 15-minute slot, potentially reducing costs and discomfort for a large subgroup of patients.

  2. The simplicity principle in perception and cognition.

    PubMed

    Feldman, Jacob

    2016-09-01

    The simplicity principle, traditionally referred to as Occam's razor, is the idea that simpler explanations of observations should be preferred to more complex ones. In recent decades the principle has been clarified via the incorporation of modern notions of computation and probability, allowing a more precise understanding of how exactly complexity minimization facilitates inference. The simplicity principle has found many applications in modern cognitive science, in contexts as diverse as perception, categorization, reasoning, and neuroscience. In all these areas, the common idea is that the mind seeks the simplest available interpretation of observations, or, more precisely, that it balances a bias toward simplicity with a somewhat opposed constraint to choose models consistent with perceptual or cognitive observations. This brief tutorial surveys some of the uses of the simplicity principle across cognitive science, emphasizing how complexity minimization in a number of forms has been incorporated into probabilistic models of inference. WIREs Cogn Sci 2016, 7:330-340. doi: 10.1002/wcs.1406. © 2016 Wiley Periodicals, Inc.

  3. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, the accuracy of a system depends upon its classification results. Classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Besides its ease of use, simplicity, and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e., "k", used in the computation. At present, it is hard to find the optimal value of "k" using any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from the initial selection of predefined variables and the neighbor selection found in the KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimental results based on the confusion-matrix accuracy parameter show a higher accuracy value with the AVNM rule. The proposed AVNM rule is based on a sample space reduction mechanism for identification of the optimal number of nearest neighbors. AVNM results in better classification accuracy and a minimum error rate compared with the state-of-the-art algorithm, KNN, and its variants. The proposed rule automates the selection of nearest neighbors and improves the classification rate for the UCI datasets and the manually created dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
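
    The AVNM rule itself is specified in the paper; as context, here is a hedged sketch of the distance-weighted voting mechanism that such rules build on, applied as plain weighted KNN over toy points:

    ```python
    # Distance-weighted KNN voting: each of the k nearest neighbors votes for
    # its class with weight inversely proportional to its distance. This is
    # the baseline mechanism, not the paper's AVNM rule.
    import numpy as np

    def weighted_vote(X, y, query, k=5):
        d = np.linalg.norm(X - query, axis=1)
        idx = np.argsort(d)[:k]
        votes = {}
        for i in idx:
            votes[y[i]] = votes.get(y[i], 0.0) + 1.0 / (d[i] + 1e-9)
        return max(votes, key=votes.get)

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
    y = [0] * 30 + [1] * 30
    print(weighted_vote(X, np.array(y), np.array([3.5, 3.5])))   # -> 1
    ```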

  4. Hypernetworks Reveal Compound Variables That Capture Cooperative and Competitive Interactions in a Soccer Match.

    PubMed

    Ramos, João; Lopes, Rui J; Marques, Pedro; Araújo, Duarte

    2017-01-01

    The combination of sports sciences theorization and social networks analysis (SNA) has offered useful new insights for addressing team behavior. However, SNA typically represents the dynamics of team behavior during a match in dyadic interactions and in a single cumulative snapshot. This study aims to overcome these limitations by using hypernetworks to describe illustrative cases of team behavior dynamics at various other levels of analyses. Hypernetworks simultaneously access cooperative and competitive interactions between teammates and opponents across space and time during a match. Moreover, hypernetworks are not limited to dyadic relations, which are typically represented by edges in other types of networks. In a hypernetwork, n-ary relations (with n > 2) and their properties are represented with hyperedges connecting more than two players simultaneously (the so-called simplex; plural, simplices). Simplices can capture the interactions of sets of players that may include an arbitrary number of teammates and opponents. In this qualitative study, we first used the mathematical formalisms of hypernetworks to represent a multilevel team behavior dynamics, including micro (interactions between players), meso (dynamics of a given critical event, e.g., an attack interaction), and macro (interactions between sets of players) levels. Second, we investigated different features that could potentially explain the occurrence of critical events, such as aggregation or disaggregation of simplices relative to goal proximity. Finally, we applied hypernetworks analysis to soccer games from the English premier league (season 2010-2011) by using two-dimensional player displacement coordinates obtained with a multiple-camera match analysis system provided by STATS (formerly Prozone). Our results show that (i) at micro level the most frequently occurring simplices configuration is 1vs.1 (one attacker vs. one defender); (ii) at meso level, the dynamics of simplices transformations near the goal depends on significant changes in the players' speed and direction; (iii) at macro level, simplices are connected to one another, forming "simplices of simplices" including the goalkeeper and the goal. These results validate qualitatively that hypernetworks and related compound variables can capture and be used in the analysis of the cooperative and competitive interactions between players and sets of players in soccer matches.

  5. The accuracy of dynamic attitude propagation

    NASA Technical Reports Server (NTRS)

    Harvie, E.; Chu, D.; Woodard, M.

    1990-01-01

    Propagating attitude by integrating Euler's equation for rigid body motion has long been suggested for the Earth Radiation Budget Satellite (ERBS) but until now has not been implemented. Because of limited Sun visibility, propagation is necessary for yaw determination. With the deterioration of the gyros, dynamic propagation has become more attractive. Angular rates are derived by integrating Euler's equation with a stepsize of 1 second, using torques computed from telemetered control system data. The environmental torque model was quite basic, including only gravity gradient and unshadowed aerodynamic torques. Knowledge of the control torques is critical to the accuracy of dynamic modeling; because of its coarseness and sparsity, the control actuator telemetry was smoothed before integration. The dynamic model was incorporated into existing ERBS attitude determination software, and the modeled rates were then used for attitude propagation in the standard ERBS fine-attitude algorithm. In spite of the simplicity of the approach, the dynamically propagated attitude agreed well with the gyro-propagated attitude in roll and yaw, but diverged up to 3 degrees in pitch because of the very low resolution of the pitch momentum wheel telemetry. When control anomalies significantly perturb the nominal attitude, the effect of telemetry granularity is reduced and the dynamically propagated attitudes are accurate on all three axes.
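
    The propagation step described here amounts to integrating I·dω/dt = τ − ω × (Iω) forward in time. A minimal Python sketch with the 1-second stepsize quoted above; the inertia tensor and torque values are placeholders, not ERBS data:

    ```python
    import numpy as np

    def omega_dot(omega, I, torque):
        """Euler's rigid-body equation: I * domega/dt = torque - omega x (I @ omega)."""
        return np.linalg.solve(I, torque - np.cross(omega, I @ omega))

    I = np.diag([1500.0, 1200.0, 900.0])       # principal inertias, kg m^2 (placeholder)
    omega = np.array([0.0, -1.1e-3, 0.0])      # body rates, rad/s (placeholder)
    dt = 1.0                                   # 1 s stepsize, as in the abstract
    for _ in range(60):
        tau = np.array([1.0e-4, 0.0, 5.0e-5])  # environmental + control torques (placeholder)
        omega = omega + dt * omega_dot(omega, I, tau)
    ```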

  6. Small‐angle X‐ray scattering as a useful supplementary technique to determine molecular masses of polyelectrolytes in solution

    PubMed Central

    Plazzotta, Beatrice; Diget, Jakob Stensgaard; Zhu, Kaizheng; Nyström, Bo

    2016-01-01

    Determination of the molecular masses of charged polymers is often nontrivial, and most methods have their drawbacks. For polyelectrolytes, a new possibility for determining number-average molecular masses is offered by small-angle X-ray scattering (SAXS), which allows fast determination with roughly 10% accuracy. This is done by relating the mass to the position of a characteristic peak that arises in SAXS from the local ordering caused by charge repulsions between polyelectrolytes. Advantages of the technique are the simplicity of data analysis, its independence from polymer architecture, and the low sample and time consumption. The method was tested on polyelectrolytes of various structures and chemical compositions, and the results were compared with those obtained from more conventional techniques, such as asymmetric flow field-flow fractionation, gel permeation chromatography, and classical SAXS data analysis, showing that the accuracy of the suggested method is similar to that of the other techniques. © 2016 The Authors. Journal of Polymer Science Part B: Polymer Physics Published by Wiley Periodicals, Inc. J. Polym. Sci., Part B: Polym. Phys. 2016, 54, 1913–1917 PMID:27840558
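
    The abstract gives the idea but not the calibration. As a rough illustration of the arithmetic, one can read a correlation distance d = 2π/q* off the peak, take one chain per d³ (a dilute-regime assumption that may differ from the paper's actual treatment), and divide the mass concentration by that number density:

    ```python
    import numpy as np

    N_A = 6.022e23  # Avogadro's number, 1/mol

    def mn_from_peak(q_star_inv_angstrom, conc_g_per_l):
        """Toy number-average molecular mass from the SAXS peak position.
        Assumes one chain per cubic correlation distance (illustrative only)."""
        d_cm = (2.0 * np.pi / q_star_inv_angstrom) * 1e-8      # correlation distance, cm
        chains_per_cm3 = 1.0 / d_cm ** 3                       # assumed chain number density
        return (conc_g_per_l / 1000.0) * N_A / chains_per_cm3  # g/mol
    ```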

  7. Theoretical and experimental study of mirrorless fiber optics refractometer based on quasi-Gaussian approach

    NASA Astrophysics Data System (ADS)

    Abdullah, M.; Krishnan, Ganesan; Saliman, Tiffany; Fakaruddin Sidi Ahmad, M.; Bidin, Noriah

    2018-03-01

    A mirrorless refractometer was studied and analyzed using the quasi-Gaussian beam approach. The Fresnel equation for reflectivity at the interface between two media with different refractive indices was used to calculate the directional reflectivity, R. Various liquid samples with refractive indices from 1.3325 to 1.4657 were used. Experimentally, a fiber bundle probe with a concentric configuration of 16 receiving fibers and a single transmitting fiber was employed to verify the developed models. The sensor performance in terms of sensitivity, linear range, and resolution was analyzed and calculated. The developed theoretical models were shown to provide quantitative guidance on the sensor output with high accuracy. The highest resolution of the sensor was 4.39 × 10⁻³ refractive index units, obtained by correlating the peak voltage with the refractive index. This resolution is sufficient for determining the specific refractive index increment of most polymer solutions and certain proteins, and for monitoring bacterial growth. The accuracy, simplicity, and long-term effectiveness of the proposed non-contact sensor indicate good potential for commercialization.
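
    At normal incidence the Fresnel reflectivity reduces to R = ((n1 − n2)/(n1 + n2))², which is the quantity the sensor reads out. A minimal sketch (the silica-core index of 1.45 is an assumed value):

    ```python
    def fresnel_reflectivity(n_fiber, n_sample):
        """Normal-incidence Fresnel reflectivity at the fiber-end/liquid interface."""
        return ((n_fiber - n_sample) / (n_fiber + n_sample)) ** 2

    # Reflected signal falls as the sample index approaches the fiber index.
    for n in (1.3325, 1.4000, 1.4657):
        print(n, fresnel_reflectivity(1.45, n))
    ```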

  8. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (palabos).

  9. Upwind schemes and bifurcating solutions in real gas computations

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1992-01-01

    The area of high-speed flow is seeing renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), the Space Shuttle, and future civil transport concepts. Upwind schemes for solving such flows have become increasingly popular in the last decade due to their excellent shock-capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on this scheme are presented to demonstrate its feasibility, accuracy, and efficiency. One of the test problems is a Chapman-Jouguet detonation problem, for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends on both the upwind scheme used and the limiter employed to obtain second-order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution with the Van Leer limiter. With the Roe scheme, the spurious solution is obtained for all limiters.
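
    Since the reported CJ result hinges on the limiter, the two limiter functions named above are worth writing out; these are the standard textbook forms:

    ```python
    import numpy as np

    def superbee(r):
        """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
        return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

    def van_leer(r):
        """Van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)."""
        return (r + np.abs(r)) / (1.0 + np.abs(r))
    ```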

  10. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    NASA Astrophysics Data System (ADS)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, a multi-component strain gauge force and moment sensor (also known as a balance) is generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The rig has six degrees of freedom and six different component-loading structures that can be applied separately or synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. The rig provides the means by which various formal experimental design techniques can be implemented, and its simplicity saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.

  11. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    DOE PAGES

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...

    2015-07-07

    Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough for large molecular systems. Here, we have developed two new methods, accelerated-SCMC and accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many of the cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This suggests that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  12. Amphiphilic nanoparticles suppress droplet break-up in a concentrated emulsion flowing through a narrow constriction

    PubMed Central

    Gai, Ya; Kim, Minkyu; Pan, Ming; Tang, Sindy K. Y.

    2017-01-01

    This paper describes the break-up behavior of a concentrated emulsion comprising drops stabilized by amphiphilic silica nanoparticles flowing in a tapered microchannel. Such geometry is often used in serial droplet interrogation and sorting processes in droplet microfluidics applications. When exposed to high viscous stresses, drops can undergo break-up and compromise their physical integrity. As these drops are used as micro-reactors, such compromise leads to a loss in the accuracy of droplet-based assays. Here, we show that droplet break-up is suppressed by replacing the fluoro-surfactant commonly used in current droplet microfluidics applications with amphiphilic nanoparticles as the droplet stabilizer. We identify parameters that influence the break-up of these drops and demonstrate that break-up probability increases with increasing capillary number and confinement, decreases with nanoparticle size, and is insensitive to viscosity ratio within the range tested. Practically, our results reveal two key advantages of nanoparticles with direct applications to droplet microfluidics. First, replacing surfactants with nanoparticles suppresses break-up and increases the throughput of the serial interrogation process to 3 times that of the surfactant system under similar flow conditions. Second, the insensitivity of break-up to droplet viscosity makes it possible to process samples having different compositions and viscosities without having to change the channel and droplet geometry in order to maintain the same degree of break-up and corresponding assay accuracy. PMID:28652887
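
    The governing dimensionless group here is the capillary number, Ca = μU/σ, the ratio of viscous stress to interfacial tension; break-up probability rises with Ca. A one-line sketch with illustrative values:

    ```python
    def capillary_number(mu_pa_s, velocity_m_s, sigma_n_m):
        """Ca = mu * U / sigma: viscous stress relative to interfacial tension."""
        return mu_pa_s * velocity_m_s / sigma_n_m

    print(capillary_number(1.0e-3, 0.1, 5.0e-3))  # illustrative values -> Ca = 0.02
    ```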

  13. Thoracic outlet syndrome: a controversial clinical condition. Part 1: anatomy, and clinical examination/diagnosis

    PubMed Central

    Hooper, Troy L; Denton, Jeff; McGalliard, Michael K; Brismée, Jean-Michel; Sizer, Phillip S

    2010-01-01

    Thoracic outlet syndrome (TOS) is a frequently overlooked peripheral nerve compression or tension event that creates difficulties for the clinician regarding diagnosis and management. Investigators have categorized this condition as vascular versus neurogenic, where vascular TOS can be subcategorized as either arterial or venous, and neurogenic TOS as either true or disputed. The thoracic outlet anatomical container presents several key regional components, each capable of compromising the neurovascular structures coursing within. Bony and soft tissue abnormalities, along with mechanical dysfunctions, may contribute to neurovascular compromise. Diagnosing TOS can be challenging because the symptoms vary greatly amongst patients with the disorder and can overlap with other conditions, including double crush syndrome. A careful history and thorough clinical examination are the most important components in establishing the diagnosis of TOS. Specific clinical tests of documented accuracy can be used to support a clinical diagnosis, especially when a cluster of positive tests is observed. PMID:21655389

  14. Subliminal food images compromise superior working memory performance in women with restricting anorexia nervosa.

    PubMed

    Brooks, Samantha J; O'Daly, Owen G; Uher, Rudolf; Schiöth, Helgi B; Treasure, Janet; Campbell, Iain C

    2012-06-01

    Prefrontal cortex (PFC) is dysregulated in women with restricting anorexia nervosa (RAN). It is not known whether appetitive non-conscious stimuli bias cognitive responses in those with RAN. Thirteen women with RAN and 20 healthy controls (HC) completed a dorsolateral PFC (DLPFC) working memory task and an anterior cingulate cortex (ACC) conflict task, while masked subliminal food, aversive and neutral images were presented. During the DLPFC task, accuracy was higher in the RAN compared to the HC group, but superior performance was compromised when subliminal food stimuli were presented: errors positively correlated with self-reported trait anxiety in the RAN group. These effects were not observed in the ACC task. Appetitive activation is intact and anxiogenic in women with RAN, and non-consciously interacts with working memory processes associated with the DLPFC. This interaction mechanism may underlie cognitive inhibition of appetitive processes that are anxiety inducing, in people with AN. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in the quadratic cost function. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on their determinants. The extremization is really a minimization of the effects of disturbances, and is interpreted as a compromise between generalized system accuracy and generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. This approach yields an objective measure for the design of a control system, a measure determined by the system requirements.
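
    The underlying machinery is the standard LQR solution of the algebraic Riccati equation; the determinant-constrained weight selection of the paper is not reproduced here. A generic Python sketch with placeholder matrices:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Placeholder double-integrator-like system, not the helicopter model of the paper.
    A = np.array([[0.0, 1.0], [0.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])   # state weighting ("accuracy")
    R = np.array([[1.0]])      # control weighting ("response speed"/effort)

    P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
    print(K)
    ```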

  16. Measurement of surface tension by sessile drop tensiometer with superoleophobic surface

    NASA Astrophysics Data System (ADS)

    Kwak, Wonshik; Park, Jun Kwon; Yoon, Jinsung; Lee, Sanghyun; Hwang, Woonbong

    2018-03-01

    A sessile drop tensiometer provides a simple and efficient method of determining the surface tension of various liquids. The technique involves capturing the shape of an axisymmetric liquid droplet and iteratively fitting the Young-Laplace equation, which balances surface tension against the gravitational deformation of the drop. Since the advent of high-quality digital cameras and desktop computers, this process has been automated with precision. Despite its appealing simplicity, however, the sessile drop tensiometer has complications and limitations: liquids with low surface tension spread on ordinary surfaces and fail to form the well-defined droplets the fit requires. We propose a method of measuring the surface tension of low-surface-tension liquids using a sessile drop tensiometer with a superoleophobic surface fabricated by acidic etching and anodization, and we investigate the accuracy of the measurement as the wettability of the measuring plate surface is changed.

  17. A Robust Static Headspace GC-FID Method to Detect and Quantify Formaldehyde Impurity in Pharmaceutical Excipients

    PubMed Central

    Al-Khayat, Mohammad Ammar; Karabet, Francois; Al-Mardini, Mohammad Amer

    2018-01-01

    Formaldehyde is a highly reactive impurity that can be found in many pharmaceutical excipients. Trace levels of this impurity may affect drug product stability, safety, efficacy, and performance. A static headspace gas chromatographic method was developed and validated to determine formaldehyde in pharmaceutical excipients after an effective derivatization procedure using acidified ethanol. Diethoxymethane, the derivative of formaldehyde, was then directly analyzed by GC-FID. Despite its simplicity, the developed method is characterized by specificity, accuracy, and precision. The limits of detection and quantification of formaldehyde in the samples were 2.44 and 8.12 µg/g, respectively. The method relies on the simple and economical GC-FID technique instead of MS detection, and it was successfully used to analyze formaldehyde in commonly used pharmaceutical excipients. PMID:29686930

  18. Photoacoustic measurement of refractive index of dye solutions and myoglobin for biosensing applications

    PubMed Central

    Goldschmidt, Benjamin S.; Mehta, Smit; Mosley, Jeff; Walter, Chris; Whiteside, Paul J. D.; Hunt, Heather K.; Viator, John A.

    2013-01-01

    Current methods of determining the refractive index of chemicals and materials, such as ellipsometry and reflectometry, are limited by their inability to analyze highly absorbing or highly transparent materials, as well as the required prior knowledge of the sample thickness and estimated refractive index. Here, we present a method of determining the refractive index of solutions using the photoacoustic effect. We show that a photoacoustic refractometer can analyze highly absorbing dye samples to within 0.006 refractive index units of a handheld optical refractometer. Further, we use myoglobin, an early non-invasive biomarker for malignant hyperthermia, as a proof of concept that this technique is applicable for use as a medical diagnostic. Comparison of the speed, cost, simplicity, and accuracy of the techniques shows that this photoacoustic method is well-suited for optically complex systems. PMID:24298407

  19. Coarse graining flow of spin foam intertwiners

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Schnetter, Erik; Seth, Cameron J.; Steinhaus, Sebastian

    2016-12-01

    Simplicity constraints play a crucial role in the construction of spin foam models, yet their effective behavior on larger scales is scarcely explored. In this article we introduce intertwiner and spin net models for the quantum group SU (2 )k×SU (2 )k, which implement the simplicity constraints analogous to four-dimensional Euclidean spin foam models, namely the Barrett-Crane (BC) and the Engle-Pereira-Rovelli-Livine/Freidel-Krasnov (EPRL/FK) model. These models are numerically coarse grained via tensor network renormalization, allowing us to trace the flow of simplicity constraints to larger scales. In order to perform these simulations we have substantially adapted tensor network algorithms, which we discuss in detail as they can be of use in other contexts. The BC and the EPRL/FK model behave very differently under coarse graining: While the unique BC intertwiner model is a fixed point and therefore constitutes a two-dimensional topological phase, BC spin net models flow away from the initial simplicity constraints and converge to several different topological phases. Most of these phases correspond to decoupling spin foam vertices; however we find also a new phase in which this is not the case, and in which a nontrivial version of the simplicity constraints holds. The coarse graining flow of the BC spin net models indicates furthermore that the transitions between these phases are not of second order. The EPRL/FK model by contrast reveals a far more intricate and complex dynamics. We observe an immediate flow away from the original simplicity constraints; however, with the truncation employed here, the models generically do not converge to a fixed point. The results show that the imposition of simplicity constraints can indeed lead to interesting and also very complex dynamics. Thus we need to further develop coarse graining tools to efficiently study the large scale behavior of spin foam models, in particular for the EPRL/FK model.

  20. Polyethylene glycol versus dual sugar assay for gastrointestinal permeability analysis: is it time to choose?

    PubMed Central

    van Wijck, Kim; Bessems, Babs AFM; van Eijk, Hans MH; Buurman, Wim A; Dejong, Cornelis HC; Lenaerts, Kaatje

    2012-01-01

    Background Increased intestinal permeability is an important measure of disease activity and prognosis. Currently, many permeability tests are available and no consensus has been reached as to which test is most suitable. The aim of this study was to compare urinary probe excretion and diagnostic accuracy of a polyethylene glycol (PEG) assay and a dual sugar assay in a double-blinded crossover study. Methods Gastrointestinal permeability was measured in nine volunteers using PEG 400, PEG 1500, and PEG 3350 or lactulose-rhamnose. On 4 separate days, permeability was analyzed after oral intake of placebo or indomethacin, a drug known to increase intestinal permeability. Plasma intestinal fatty acid binding protein and calprotectin levels were determined to verify compromised intestinal integrity after indomethacin consumption. Urinary samples were collected at baseline, hourly up to 5 hours after probe intake, and between 5 and 24 hours. Urinary excretion of PEG and sugars was determined using high-pressure liquid chromatography-evaporative light scattering detection and liquid chromatography-mass spectrometry, respectively. Results Intake of indomethacin increased plasma intestinal fatty acid-binding protein and calprotectin levels, reflecting loss of intestinal integrity and inflammation. In this state of indomethacin-induced gastrointestinal compromise, urinary excretion of the three PEG probes and lactulose increased compared with placebo. Urinary PEG 400 excretion, the PEG 3350/PEG 400 ratio, and the lactulose/rhamnose ratio could accurately detect indomethacin-induced increases in gastrointestinal permeability, especially within 2 hours of probe intake. Conclusion Hourly urinary excretion and diagnostic accuracy of PEG and sugar probes show high concordance for detection of indomethacin-induced increases in gastrointestinal permeability. This comparative study improves our knowledge of permeability analysis in man by providing a clear overview of both tests and demonstrates equivalent performance in the current setting. PMID:22888267

  1. Rapid jetting status inspection and accurate droplet volume measurement for a piezo drop-on-demand inkjet print head using a scanning mirror for display applications

    NASA Astrophysics Data System (ADS)

    Shin, Dong-Youn; Kim, Minsung

    2017-02-01

    Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. Considering that the conventional vision-based method with a CCD camera requires an optical measurement accuracy of less than 25 nm to measure the volume of an in-flight droplet with a nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.

  2. Rapid jetting status inspection and accurate droplet volume measurement for a piezo drop-on-demand inkjet print head using a scanning mirror for display applications.

    PubMed

    Shin, Dong-Youn; Kim, Minsung

    2017-02-01

    Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. Considering that the conventional vision-based method with a CCD camera requires an optical measurement accuracy of less than 25 nm to measure the volume of an in-flight droplet with a nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.

  3. A binary method for simple and accurate two-dimensional cursor control from EEG with minimal subject training.

    PubMed

    Kayagil, Turan A; Bai, Ou; Henriquez, Craig S; Lin, Peter; Furlani, Stephen J; Vorbach, Sherry; Hallett, Mark

    2009-05-06

    Brain-computer interfaces (BCI) use electroencephalography (EEG) to interpret user intention and control an output device accordingly. We describe a novel BCI method to use a signal from five EEG channels (comprising one primary channel with four additional channels used to calculate its Laplacian derivation) to provide two-dimensional (2-D) control of a cursor on a computer screen, with simple threshold-based binary classification of band power readings taken over pre-defined time windows during subject hand movement. We tested the paradigm with four healthy subjects, none of whom had prior BCI experience. Each subject played a game wherein he or she attempted to move a cursor to a target within a grid while avoiding a trap. We also present supplementary results including one healthy subject using motor imagery, one primary lateral sclerosis (PLS) patient, and one healthy subject using a single EEG channel without Laplacian derivation. For the four healthy subjects using real hand movement, the system provided accurate cursor control with little or no required user training. The average accuracy of the cursor movement was 86.1% (SD 9.8%), which is significantly better than chance (p = 0.0015). The best subject achieved a control accuracy of 96%, with only one incorrect bit classification out of 47. The supplementary results showed that control can be achieved under the respective experimental conditions, but with reduced accuracy. The binary method provides naïve subjects with real-time control of a cursor in 2-D using dichotomous classification of synchronous EEG band power readings from a small number of channels during hand movement. The primary strengths of our method are simplicity of hardware and software, and high accuracy when used by untrained subjects.
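
    The classification step described above, a simple threshold on band power over a time window, can be sketched in a few lines; the 8-12 Hz band and the threshold argument below are illustrative choices, not the paper's settings:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, lo=8.0, hi=12.0):
        """Mean Welch power of a 1-D EEG window x in the [lo, hi] Hz band."""
        f, pxx = welch(x, fs=fs, nperseg=int(fs))
        return pxx[(f >= lo) & (f <= hi)].mean()

    def binary_decision(x, fs, threshold):
        """Threshold-based binary classification of one band-power reading."""
        return int(band_power(x, fs) > threshold)
    ```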

  4. Prediction of arterial oxygen partial pressure after changes in FIO₂: validation and clinical application of a novel formula.

    PubMed

    Al-Otaibi, H M; Hardman, J G

    2011-11-01

    Existing methods allow prediction of Pa(O₂) during adjustment of Fi(O₂). However, these are cumbersome and lack sufficient accuracy for use in the clinical setting. The present studies aim to extend the validity of a novel formula designed to predict Pa(O₂) during adjustment of Fi(O₂) and to compare it with the current methods. Sixty-seven new data sets were collected from 46 randomly selected, mechanically ventilated patients. Each data set consisted of two subsets (before and 20 min after Fi(O₂) adjustment) and contained ventilator settings, pH, and arterial blood gas values. We compared the accuracy of Pa(O₂) prediction using the new formula (which utilizes only the pre-adjustment Pa(O₂) and the pre- and post-adjustment Fi(O₂)) with prediction using assumptions of a constant Pa(O₂)/Fi(O₂) ratio or a constant Pa(O₂)/PA(O₂) (arterial/alveolar) ratio. Subsequently, 20 clinicians predicted Pa(O₂) using the new formula and using Nunn's isoshunt diagram, and the accuracy of the clinicians' predictions was examined. The 95% limits of agreement (LA(95%)) between predicted and measured Pa(O₂) in the patient group were: new formula 0.11 (2.0) kPa, Pa(O₂)/Fi(O₂) -1.9 (4.4) kPa, and Pa(O₂)/PA(O₂) -1.0 (3.6) kPa. The LA(95%) of clinicians' predictions of Pa(O₂) were 0.56 (3.6) kPa (new formula) and -2.7 (6.4) kPa (isoshunt diagram). The new formula's prediction of changes in Pa(O₂) is acceptably accurate and reliable, and better than any other existing method; its use by clinicians appears to improve accuracy over the most popular existing method. The simplicity of the new method may allow its regular use in the critical care setting.
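
    The authors' formula itself is not reproduced in the abstract, but the comparator it is tested against, the constant Pa(O₂)/Fi(O₂) assumption, is simple to write down:

    ```python
    def predict_pao2_constant_ratio(pao2_pre_kpa, fio2_pre, fio2_post):
        """Baseline comparator: assume the PaO2/FiO2 ratio is unchanged by the
        FiO2 adjustment (the authors' new formula is not given in the abstract)."""
        return pao2_pre_kpa * (fio2_post / fio2_pre)

    print(predict_pao2_constant_ratio(10.0, 0.40, 0.60))  # 15.0 kPa
    ```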

  5. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).

  6. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement, a point often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between the distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then derived from the solved height. A lookup table (LUT) method is also introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
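
    The precompute-then-remap idea is the same one exposed by OpenCV's undistortion LUTs, which gives a concrete sketch of the mechanism; the camera matrix and distortion coefficients below are placeholders that would normally come from calibration:

    ```python
    import cv2
    import numpy as np

    # Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3).
    K = np.array([[1200.0,    0.0, 640.0],
                  [   0.0, 1200.0, 480.0],
                  [   0.0,    0.0,   1.0]])
    dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

    # Build the pixel-mapping LUT once...
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (1280, 960), cv2.CV_32FC1)

    def correct(frame):
        """...then correct every incoming frame with a fast table lookup."""
        return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    ```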

  7. The Diagnostic Accuracy of Cytology for the Diagnosis of Hepatobiliary and Pancreatic Cancers.

    PubMed

    Al-Hajeili, Marwan; Alqassas, Maryam; Alomran, Astabraq; Batarfi, Bashaer; Basunaid, Bashaer; Alshail, Reem; Alaydarous, Shahad; Bokhary, Rana; Mosli, Mahmoud

    2018-06-13

    Although cytology testing is considered a valuable method to diagnose tumors that are difficult to access such as hepato-biliary-pancreatic (HBP) malignancies, its diagnostic accuracy remains unclear. We therefore aimed to investigate the diagnostic accuracy of cytology testing for HBP tumors. We performed a retrospective study of all cytology samples that were used to confirm radiologically detected HBP tumors between 2002 and 2016. The cytology techniques used in our center included fine needle aspiration (FNA), brush cytology, and aspiration of bile. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated in comparison to histological confirmation. From a total of 133 medical records, we calculated an overall sensitivity of 76%, specificity of 74%, a negative likelihood ratio of 0.30, and a positive likelihood ratio of 2.9. Cytology was more accurate in diagnosing lesions of the liver (sensitivity 79%, specificity 57%) and biliary tree (sensitivity 100%, specificity 50%) compared to pancreatic (sensitivity 60%, specificity 83%) and gallbladder lesions (sensitivity 50%, specificity 85%). Cytology was more accurate in detecting primary cancers (sensitivity 77%, specificity 73%) when compared to metastatic cancers (sensitivity 73%, specificity 100%). FNA was the most frequently used cytological technique to diagnose HBP lesions (sensitivity 78.8%). Cytological testing is efficient in diagnosing HBP cancers, especially for hepatobiliary tumors. Given its relative simplicity, cost-effectiveness, and paucity of alternative diagnostic methods, cytology should still be considered as a first-line tool for diagnosing HBP malignancies. © 2018 S. Karger AG, Basel.
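
    The likelihood ratios quoted above follow directly from sensitivity and specificity; plugging in the overall figures (0.76, 0.74) recovers them, up to rounding:

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
        return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

    lr_pos, lr_neg = likelihood_ratios(0.76, 0.74)
    print(round(lr_pos, 1), round(lr_neg, 2))  # 2.9 0.32 (the paper reports 2.9 and 0.30)
    ```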

  8. Comparison of Adjacency and Distance-Based Approaches for Spatial Analysis of Multimodal Traffic Crash Data

    NASA Astrophysics Data System (ADS)

    Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.

    2017-09-01

    Many studies have utilized the spatial correlations among traffic crash data to develop crash prediction models, with the aim of investigating influential factors or predicting crash counts at different sites. Spatial correlation has been used to account for heterogeneity through different forms of weight matrices, which improves the estimation performance of the models, but the weight matrices themselves have rarely been compared for crash-count prediction accuracy. This study compares two different approaches for modelling the spatial correlations among crash data at the macro (county) level. Multivariate full Bayesian crash prediction models were developed using Decay-50 (distance-based) and Queen-1 (adjacency-based) weight matrices for simultaneous estimation of crash counts for four different modes: vehicle, motorcycle, bike, and pedestrian. The goodness-of-fit and several criteria for crash-count prediction accuracy revealed the superiority of Decay-50 over Queen-1. Decay-50 differs essentially from Queen-1 in its selection of neighbors and its more robust spatial weight structure, which gives it the flexibility to accommodate spatially correlated crash data; its consistently better prediction performance further bolstered its superiority. Although the effort of collecting centroid distances among counties for Decay-50 may appear to be a downside, the model has a significant edge in fitting the crash data without losing the simplicity of computing the estimated crash counts.
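
    The two weighting schemes can be sketched as follows; the inverse-distance-within-50-km reading of "Decay-50" is an assumption on our part, since the abstract does not spell out the decay function:

    ```python
    import numpy as np

    def row_standardize(W):
        """Row-standardize a spatial weight matrix (all-zero rows stay zero)."""
        s = W.sum(axis=1, keepdims=True)
        return np.divide(W, s, out=np.zeros_like(W), where=s > 0)

    def queen1_weights(adjacency):
        """Queen-1 style: equal weights over first-order neighbors."""
        return row_standardize(adjacency.astype(float))

    def decay50_weights(dist_km, cutoff_km=50.0):
        """Distance-based: inverse centroid distance within a cutoff (assumed form)."""
        W = np.where((dist_km > 0) & (dist_km <= cutoff_km),
                     1.0 / np.maximum(dist_km, 1e-9), 0.0)
        return row_standardize(W)
    ```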

  9. EMSAR: estimation of transcript abundance from RNA-seq data by mappability-based segmentation and reclustering.

    PubMed

    Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J

    2015-09-03

    RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.

  10. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered from certain drawbacks, for example, deficiencies in the interpolation technique, non-reciprocity in head or flow calculations, loss of accuracy from high truncation errors, and numerical problems arising from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme builds on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers, producing results with acceptable degrees of accuracy. The method shows potential for solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.

  11. Deformation-induced speckle-pattern evolution and feasibility of correlational speckle tracking in optical coherence elastography.

    PubMed

    Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex

    2015-07-01

    Feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. Specifics of applying DIC methods to OCT, compared to processing of photographic images in mechanical engineering applications, are emphasized and main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing of complex-valued OCT data (comprising both amplitude and phase) compared to intensity-only scans mitigates these deleterious effects to some degree. Criteria of the attainable speckle tracking accuracy and its dependence on the key OCT system parameters are established.

  12. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but the computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that yields accurate wavefront shapes and lower mesh dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ˜1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  13. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies, but with significantly lower computation cost and memory requirements.
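
    The backbone of the scheme is an ordinary hierarchical k-means tree; a bare sketch follows (the discriminative leaf models that give D-HKTree its name are omitted):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_hk_tree(X, branching=4, min_leaf=50, depth=0, max_depth=5):
        """Recursively partition X with k-means; query cost then grows with
        tree depth rather than with the number of categories."""
        if len(X) <= min_leaf or depth == max_depth:
            return {"leaf": True, "points": X}
        km = KMeans(n_clusters=branching, n_init=10).fit(X)
        children = [build_hk_tree(X[km.labels_ == c], branching, min_leaf,
                                  depth + 1, max_depth) for c in range(branching)]
        return {"leaf": False, "centers": km.cluster_centers_, "children": children}
    ```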

  14. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
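
    For reference, the generalized Born energy being parameterized here is the Still-type pairwise sum; the OBC recipe supplies the effective radii, which the sketch below simply takes as given:

    ```python
    import numpy as np

    def gb_energy(q, R_eff, coords, eps_in=1.0, eps_out=78.5):
        """Still-type generalized Born solvation energy in kcal/mol (332 is the
        Coulomb constant in kcal A / (mol e^2)); R_eff as produced by, e.g., OBC."""
        pref = -0.5 * 332.0 * (1.0 / eps_in - 1.0 / eps_out)
        E = 0.0
        for i in range(len(q)):
            for j in range(len(q)):
                r2 = float(np.sum((coords[i] - coords[j]) ** 2))
                f = np.sqrt(r2 + R_eff[i] * R_eff[j]
                            * np.exp(-r2 / (4.0 * R_eff[i] * R_eff[j])))
                E += pref * q[i] * q[j] / f  # i == j terms give the self energies
        return E
    ```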

  15. Methods to attack or defend the professional integrity and competency of infrared thermographers and their work; what every attorney and infrared thermographer needs to know before going into a lawsuit

    NASA Astrophysics Data System (ADS)

    Colbert, Fred

    2013-05-01

    There has been a significant increase in the number of in-house infrared thermographic predictive maintenance programs for electrical/mechanical inspections, as compared to out-sourced programs using hired consultants, and the number of infrared consulting companies offering out-sourced programs has also grown substantially. These market segments include building envelope (commercial and residential), refractory, boiler evaluations, and more. The surge is driven by two main factors: 1. the low cost of investment in the equipment (the cost of cameras and peripherals continues to decline); and 2. novel marketing campaigns by camera manufacturers looking to sell more cameras into an otherwise saturated market. The key characteristic of these campaigns is to oversimplify the applications and understate the significance of the technical training, specific skills, and experience needed to obtain the risk-lowering information that a facility manager needs. These camera-selling campaigns focus on the simplicity of taking a thermogram but ignore what it actually takes to perform and manage a credible, valid IR program, which in turn exposes everyone to tremendous liability. As in-house programs and out-sourced consulting services compete head to head for share of a constricted market, the price of out-sourced services drops, and something must be compromised to stay competitive on price: that compromise is the knowledge, technical skill, and experience of the thermographer. The same erosion ends up being reflected in the skill sets of in-house thermographers as well. This oversimplification of skill and experience is producing the "perfect storm" for infrared thermography, for both in-house and out-sourced programs.

  16. Lessons learned: Infrastructure development and financial management for large, publicly funded, international trials

    PubMed Central

    Larson, Gregg S; Carey, Cate; Grarup, Jesper; Hudson, Fleur; Sachi, Karen; Vjecha, Michael J; Gordin, Fred

    2015-01-01

    Background/Aims Randomized clinical trials are widely recognized as essential to address world-wide clinical and public health research questions. However, for many conditions, their size and duration can overwhelm available public and private resources. To remain competitive in international research settings, advocates and practitioners of clinical trials must implement practices that reduce their cost. We identify approaches and practices for large, publicly-funded, international trials that reduce cost without compromising data integrity, and recommend an approach to cost reporting that permits comparison of clinical trials. Methods We describe the organizational and financial characteristics of INSIGHT, an infectious disease research network that conducts multiple, large, long-term, international trials, and examine challenges associated with simple and streamlined governance and an infrastructure and financial management model that is based on performance, transparency, and accountability. Results It is possible to reduce costs of participant follow-up and not compromise clinical trial quality or integrity. The INSIGHT network has successfully completed four large HIV trials using cost-efficient practices that have not adversely affected investigator enthusiasm, accrual rates, loss-to-follow-up, adherence to the protocol, and completion of data collection. This experience is relevant to the conduct of large, publicly funded trials in other disease areas, particularly trials dependent on international collaborations. Conclusion New approaches, or creative adaptation of traditional clinical trial infrastructure and financial management tools, can render large, international clinical trials more cost-efficient by emphasizing structural simplicity; minimal up-front costs; payments for performance; and uniform algorithms and fees-for-service, irrespective of location. However, challenges remain. They include institutional resistance to financial change, growing trial complexity, and the difficulty of sustaining network infrastructure absent stable research work. There is also a need for more central monitoring, improved and harmonized regulations, and a widely-applied metric for measuring and comparing cost efficiency in clinical trials. ClinicalTrials.gov is recommended as a location where standardized trial cost information could be made publicly accessible. PMID:26908541

  17. Crustal layering, simplicity, and the oil industry: The alteration of an epistemic paradigm by a commercial environment

    NASA Astrophysics Data System (ADS)

    Anduaga, Aitor

    This paper proposes that the gradual alteration of the predominant epistemic paradigm in crustal seismology in the interwar period-namely, simplicity-came about because of the strong influence of a particular commercial environment, i.e. the oil industry. I begin by demonstrating the interwar predominance of Jeffreys' 'simplicity postulate' and his probabilistic epistemology, highlighting the espousal by several seismologists (Bullen, Stoneley, Byerly), whose crustal models drew on mathematical idealisations. Next, I demonstrate that the renunciation of simplicity in the 1930s came about too quickly, and, above all, too heterodoxically to have been the result of new geological evidence. Rather, I argue, the paradigm shift among seismologists was a result of the significant rise in seismic exploration generated by the oil industry. Driven by market demands, American petroleum companies pioneered new technologies, organised research initiatives, and trained young geophysicists who, through the fusion of experimentalism and field experience, brought about fundamental progress in earthquake seismology. Remarkably, historians of science have almost entirely failed to recognise the interwar primacy of the simplicity paradigm as well as its subsequent renunciation. More importantly, they have failed to acknowledge the role the oil industry played in contributing to this renunciation and to the development of new paradigms in seismology.

  18. Linearization of the Bradford protein assay.

    PubMed

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
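
    In practice the linearized protocol means fitting the A590/A450 ratio, rather than A590 alone, against the standard concentrations; the readings below are hypothetical numbers for illustration:

    ```python
    import numpy as np

    # Hypothetical standard-curve readings for BSA.
    conc_ug = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])       # protein per well, ug
    a590    = np.array([0.05, 0.09, 0.13, 0.20, 0.33, 0.52])
    a450    = np.array([0.48, 0.46, 0.44, 0.41, 0.36, 0.29])

    slope, intercept = np.polyfit(conc_ug, a590 / a450, 1)   # ratio is ~linear in conc

    def protein_ug(a590_sample, a450_sample):
        """Invert the linear ratiometric standard curve for an unknown sample."""
        return (a590_sample / a450_sample - intercept) / slope
    ```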

  19. On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.

    PubMed

    Louarroudi, E; Sanchez, B

    2017-02-01

    When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while measuring accurately non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.

  20. Green Engineering Principle #4 Maximize Efficiency

    EPA Science Inventory

    As one reads the twelve principles of Green Engineering, there is one message that stands out and becomes ever increasingly more evident with each principle. Moreover, that message is simplicity! It is simplicity that will allow us, as a society, to become more sustainable.Althou...

  1. Accuracy of the Lifebox pulse oximeter during hypoxia in healthy volunteers.

    PubMed

    Dubowitz, G; Breyer, K; Lipnick, M; Sall, J W; Feiner, J; Ikeda, K; MacLeod, D B; Bickler, P E

    2013-12-01

    Pulse oximetry is a standard of care during anaesthesia in high-income countries. However, 70% of operating environments in low- and middle-income countries have no pulse oximeter. The 'Lifebox' oximetry project set out to bridge this gap with an inexpensive oximeter meeting CE (European Conformity) and ISO (International Organization for Standardization) standards. To date, there are no performance-specific accuracy data on this instrument. The aim of this study was to establish whether the Lifebox pulse oximeter provides clinically reliable haemoglobin oxygen saturation (SpO2) readings meeting USA Food and Drug Administration 510(k) standards. Using healthy volunteers, inspired oxygen fraction was adjusted to produce arterial haemoglobin oxygen saturation (SaO2) readings between 71% and 100% measured with a multi-wavelength oximeter. Lifebox accuracy was expressed using bias (SpO2 - SaO2), precision (SD of the bias) and the root mean square error (Arms). Simultaneous readings of SaO2 and SpO2 in 57 subjects showed a mean (SD) bias of -0.41% (2.28%) and Arms 2.31%. The Lifebox pulse oximeter meets current USA Food and Drug Administration standards for accuracy, thus representing an inexpensive solution for patient monitoring without compromising standards. © 2013 The Association of Anaesthetists of Great Britain and Ireland.
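
    For reference, the three accuracy statistics quoted above can be computed as follows; the paired readings here are illustrative, not the study's data.

    ```python
    # Minimal sketch of the accuracy statistics used above: bias (SpO2 - SaO2),
    # precision (SD of the bias) and Arms (root mean square error).
    import numpy as np

    spo2 = np.array([94.0, 88.5, 76.0, 99.0, 83.0])  # pulse oximeter readings (%)
    sao2 = np.array([94.5, 89.0, 77.5, 98.5, 84.0])  # co-oximeter reference (%)

    bias = spo2 - sao2
    print(f"mean bias = {bias.mean():+.2f} %")
    print(f"precision = {bias.std(ddof=1):.2f} %")          # SD of the bias
    print(f"Arms      = {np.sqrt(np.mean(bias**2)):.2f} %")  # RMS error metric
    ```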

  2. Displaying chest X-ray by beamer or monitor: comparison of diagnostic accuracy for subtle abnormalities.

    PubMed

    Kuiper, L M; Thijs, A; Smulders, Y M

    2012-01-01

    The advent of beamer projection of radiological images raises the issue of whether such projection compromises diagnostic accuracy. The purpose of this study was to evaluate whether beamer projection of chest X-rays is inferior to monitor display. We selected 53 chest X-rays with subtle abnormalities and 15 normal X-rays. The images were independently judged by a senior radiologist and a senior pulmonologist with a state-of-the-art computer monitor. We used their unanimous or consensus judgment as the reference test. Subsequently, four observers (one senior pulmonologist, one senior radiologist and one resident from each speciality) judged these X-rays on a standard clinical computer monitor and with beamer projection. We compared the number of correct results for each method. Overall, the sensitivity and specificity did not differ between monitor and beamer projection. Separate analyses in senior and junior examiners suggested that senior examiners had a moderate loss of diagnostic accuracy (8% lower sensitivity, p<0.05, and 6% lower specificity, p = ns) associated with the use of beamer projection, whereas juniors showed similar performance on both imaging modalities. These initial data suggest that beamer projection may be associated with a small loss of diagnostic accuracy in specific subgroups of physicians. This finding illustrates the need for more extensive studies.

  3. Quantitative comparison of OSEM and penalized likelihood image reconstruction using relative difference penalties for clinical PET

    NASA Astrophysics Data System (ADS)

    Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.

    2015-08-01

    Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM with a particular improvement in cold background regions such as lungs.
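
    As a rough illustration of the penalty in question, the function below implements the usual relative-difference form (with a gamma parameter controlling the degree of edge preservation); it is a stand-alone sketch, not the vendor's implementation.

    ```python
    # Sketch of a relative difference penalty over nearest-neighbour pairs of a
    # 2D image; larger gamma penalizes large local contrasts less severely.
    import numpy as np

    def relative_difference_penalty(img, gamma=2.0):
        """Sum of pairwise penalties over right/down neighbours of a 2D image."""
        total = 0.0
        for a, b in ((img[:, :-1], img[:, 1:]),   # horizontal neighbour pairs
                     (img[:-1, :], img[1:, :])):  # vertical neighbour pairs
            diff = a - b
            total += np.sum(diff**2 / (a + b + gamma * np.abs(diff) + 1e-12))
        return total

    print(relative_difference_penalty(np.random.rand(8, 8)))
    ```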

  4. Efficient use of unlabeled data for protein sequence classification: a comparative study.

    PubMed

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-04-29

    Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags, the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by using only the sequence regions that are more likely to be biologically relevant, for better prediction accuracy. As overrepresented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve performance of the resulting classifiers. Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. The unlabeled sequences used under the semi-supervised setting resemble unpolished gemstones: used as-is, they may carry unnecessary features and hence compromise the classification accuracy; once cut and polished, they improve the accuracy of the classifiers considerably.

  5. Investigations of fluid-strain interaction using Plate Boundary Observatory borehole data

    NASA Astrophysics Data System (ADS)

    Boyd, Jeffrey Michael

    Software has a great impact on the energy efficiency of any computing system: it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximate 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition, and we discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise": we found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature: for scalar features, energy consumption is inversely proportional to grouping size, so it decreases as grouping size goes up; for features whose size depends on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
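
    A minimal sketch of the transition-detection idea is given below; the split-window Gaussian likelihood model is our illustrative assumption, not necessarily the thesis's exact formulation.

    ```python
    # Hedged sketch of an LLR-style transition detector over a scalar feature
    # stream (e.g., per-window accelerometer energy). A high ratio of the
    # "two activities split at the middle" model over the "one activity" model
    # flags a candidate transition.
    import numpy as np

    def gaussian_loglik(x):
        """Log-likelihood of samples under their own ML Gaussian fit."""
        var = x.var() + 1e-9
        return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1.0)

    def transition_score(window):
        """LLR of a split-window model versus a single-segment model."""
        half = len(window) // 2
        split = gaussian_loglik(window[:half]) + gaussian_loglik(window[half:])
        return split - gaussian_loglik(window)

    signal = np.concatenate([np.random.normal(0, 1, 200),   # activity A
                             np.random.normal(3, 1, 200)])  # activity B
    scores = [transition_score(signal[i:i + 64]) for i in range(len(signal) - 64)]
    print("most likely transition near sample", int(np.argmax(scores)) + 32)
    ```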

  6. DTU BCI speller: an SSVEP-based spelling system with dictionary support.

    PubMed

    Vilic, Adnan; Kjaer, Troels W; Thomsen, Carsten E; Puthusserypady, S; Sorensen, Helge B D

    2013-01-01

    In this paper, a new brain computer interface (BCI) speller, named the DTU BCI speller, is introduced. It is based on the steady-state visual evoked potential (SSVEP) and features dictionary support. The system focuses on simplicity and user friendliness by using a single electrode for signal acquisition and displaying stimuli on a liquid crystal display (LCD). Nine healthy subjects participated in writing full sentences after a five-minute introduction to the system, and obtained an information transfer rate (ITR) of 21.94 ± 15.63 bits/min. The average number of characters written per minute (CPM) was 4.90 ± 3.84, with a best case of 8.74 CPM. All subjects reported systematically on different user-friendliness measures, and the overall results indicated the potential of the DTU BCI Speller system. For subjects with high classification accuracies, the introduced dictionary approach greatly reduced the time it took to write full sentences.
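
    ITR figures of this kind are conventionally computed with the Wolpaw formula; the sketch below assumes N selectable targets, classification accuracy p, and one selection every t_sel seconds, with illustrative values.

    ```python
    # Wolpaw information transfer rate: bits per selection scaled to bits/min.
    import math

    def wolpaw_itr(n, p, t_sel):
        """Information transfer rate in bits/min for n targets, accuracy p."""
        bits = math.log2(n)
        if 0 < p < 1:
            bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * 60.0 / t_sel

    print(f"{wolpaw_itr(n=30, p=0.90, t_sel=6.0):.1f} bits/min")
    ```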

  7. Wave-propagation formulation of seismic response of multistory buildings

    USGS Publications Warehouse

    Safak, E.

    1999-01-01

    This paper presents a discrete-time wave-propagation method to calculate the seismic response of multistory buildings, founded on layered soil media and subjected to vertically propagating shear waves. Buildings are modeled as an extension of the layered soil media by considering each story as another layer in the wave-propagation path. The seismic response is expressed in terms of wave travel times between the layers and wave reflection and transmission coefficients at layer interfaces. The method accounts for the filtering effects of the concentrated foundation and floor masses. Compared with the commonly used vibration formulation, the wave-propagation formulation provides several advantages, including simplicity, improved accuracy, better representation of damping, the ability to incorporate the soil layers under the foundation, and better tools for identification and damage detection from seismic records. Examples are presented to show the versatility and superiority of the method.
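
    To illustrate one ingredient of such a formulation, the sketch below computes displacement reflection and transmission coefficients from the shear impedances of two adjacent layers; the numbers are illustrative, and the full bookkeeping of travel times and multiple reflections is omitted.

    ```python
    # For vertically propagating shear waves, each interface is characterized
    # by the impedance contrast of the adjacent layers (values illustrative).
    def shear_impedance(rho, beta):
        """Shear-wave impedance: density times shear-wave velocity."""
        return rho * beta

    def interface_coefficients(z1, z2):
        """Displacement reflection/transmission, wave incident from layer 1."""
        r = (z1 - z2) / (z1 + z2)
        return r, 1.0 + r  # t = 2*z1 / (z1 + z2)

    soil = shear_impedance(rho=1800.0, beta=200.0)   # soil layer (SI units)
    story = shear_impedance(rho=300.0, beta=100.0)   # story treated as a layer
    print(interface_coefficients(soil, story))
    ```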

  8. Design and modeling of flower like microring resonator

    NASA Astrophysics Data System (ADS)

    Razaghi, Mohammad; Laleh, Mohammad Sayfi

    2016-05-01

    This paper presents a novel multi-channel optical filter structure. The proposed design is based on using a set of microring resonators (MRRs) in a new formation, named the flower-like arrangement. It is shown that, instead of using 18 MRRs, the same filtering operation can be achieved by using only 5 MRRs in the recommended formation. With this structure, six filters and four integrated demultiplexers (DEMUXs) are obtained. The simplicity, extensibility and compactness of this structure make it usable in wavelength division multiplexing (WDM) networks. Filter characteristics such as shape factor (SF), free spectral range (FSR) and stopband rejection ratio can be designed by adjusting the microrings' radii and coupling coefficients. To model this structure, the signal flow graph (SFG) method based on Mason's rule is used. The modeling method is discussed in depth. Furthermore, the accuracy and applicability of this method are verified through examples and comparison with other modeling schemes.

  9. Jedi training: playful evaluation of head-mounted augmented reality display systems

    NASA Astrophysics Data System (ADS)

    Ozbek, Christopher S.; Giesler, Bjorn; Dillmann, Ruediger

    2004-05-01

    A fundamental decision in building augmented reality (AR) systems is how to accomplish the combining of the real and virtual worlds. Nowadays this key question boils down to two alternatives: video-see-through (VST) vs. optical-see-through (OST). Both systems have advantages and disadvantages in areas like production simplicity, resolution, flexibility in composition strategies, field of view, etc. To provide additional decision criteria for high-dexterity, high-accuracy tasks and subjective user acceptance, a gaming environment was programmed that allowed good evaluation of hand-eye coordination, and that was inspired by the Star Wars movies. During an experimentation session with more than thirty participants, a preference for optical-see-through glasses in conjunction with infrared tracking was found. In particular, the high computational demand for video capture and processing, and the resulting drop in frame rate, emerged as a key weakness of the VST system.

  10. A novel finite element analysis of three-dimensional circular crack

    NASA Astrophysics Data System (ADS)

    Ping, X. C.; Wang, C. G.; Cheng, L. P.

    2018-06-01

    A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using the numerical series eigensolutions of singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as the system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. Use of the novel singular element avoids mesh refinement near the crack-front domain without loss of calculation accuracy or convergence speed. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.

  11. Using groundwater levels to estimate recharge

    USGS Publications Warehouse

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relates to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
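
    As a concrete illustration of the water-table fluctuation method, the sketch below converts a specific yield and an observed water-level rise into a recharge rate; the values are illustrative.

    ```python
    # Water-table fluctuation (WTF) estimate: recharge equals specific yield
    # times the rate of water-level rise attributed to a recharge event.
    def wtf_recharge(specific_yield, rise_m, days):
        """Recharge rate in mm/day from a rise of rise_m metres over days."""
        return specific_yield * rise_m * 1000.0 / days

    print(f"{wtf_recharge(specific_yield=0.15, rise_m=0.40, days=10.0):.1f} mm/day")
    ```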

  12. ELECTRONIC MULTIPLIER CIRCUIT

    DOEpatents

    Thomas, R.E.

    1959-08-25

    An electronic multiplier circuit is described in which an output voltage with an amplitude proportional to the product or quotient of the input signals is produced in a novel manner that facilitates simplicity of circuit construction and a high degree of accuracy in the multiplying and dividing function. The circuit broadly comprises a multiplier tube in which the plate current is proportional to the voltage applied to a first control grid multiplied by the difference between the voltage applied to a second control grid and the voltage applied to the first control grid. Means are provided to apply a first signal to be multiplied to the first control grid, together with means for applying the sum of the first and second signals to be multiplied to the second control grid, whereby the plate current of the multiplier tube is proportional to the product of the first and second signals.

  13. [Nuclear techniques in nutrition: assessment of body fat and intake of human milk in breast-fed infants].

    PubMed

    Pallaro, Anabel; Tarducci, Gabriel

    2014-12-01

    The application of nuclear techniques in the area of nutrition is safe because they use stable isotopes. The deuterium dilution method is used in body composition and human-milk-intake analysis. It is a reference method for body fat and is used to validate inexpensive tools, owing to its accuracy, its simplicity of application in individuals and populations, and its established usefulness in adults and children as an evaluation tool in clinical and health programs. It is a non-invasive technique, as it uses saliva, which facilitates assessment in pediatric populations. Changes in body fat are associated with non-communicable diseases; moreover, normal-weight individuals with high fat deposition have been reported. Furthermore, this technique is the only accurate way to determine whether infants are exclusively breast-fed, and it validates conventional methods based on surveys of mothers.

  14. A Constant Pressure Bomb

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1924-01-01

    This report describes a new optical method of unusual simplicity and good accuracy suitable for studying the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship pv = RT for gaseous equilibrium conditions to the use of both factors p and v. The method substitutes, for the mechanical complications of a manometer placed at some distance from the seat of reaction, the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.

  15. False HDAC Inhibition by Aurone Compound.

    PubMed

    Itoh, Yukihiro; Suzuki, Miki; Matsui, Taiji; Ota, Yosuke; Hui, Zi; Tsubaki, Kazunori; Suzuki, Takayoshi

    2016-01-01

    Fluorescence assays are useful tools for estimating enzymatic activity. Their simplicity and manageability make them suitable for screening enzyme inhibitors in drug discovery studies. However, researchers need to pay attention to compounds that show auto-fluorescence or quench fluorescence, because such compounds lower the accuracy of fluorescence assay systems by producing false-positive or false-negative results. In this study, we found that aurone compound 7, which has been reported as a histone deacetylase (HDAC) inhibitor, gave false-positive results. Although compound 7 was identified by an in vitro HDAC fluorescence assay, it did not show HDAC inhibitory activity in a cell-based assay, leading us to suspect its in vitro HDAC inhibitory activity. As a result of verification experiments, we found that compound 7 interferes with the HDAC fluorescence assay by quenching the HDAC fluorescence signal. Our findings underscore the pitfalls of fluorescence assays and call attention to the risk of careless interpretation.

  16. Forward flight of birds revisited. Part 1: aerodynamics and performance.

    PubMed

    Iosilevskii, G

    2014-10-01

    This paper is the first part of a two-part exposition addressing the performance and dynamic stability of birds. The aerodynamic model underlying the entire study is presented in this part. It exploits the simplicity of the lifting-line approximation to furnish the forces and moments acting on a single wing in closed analytical form. The accuracy of the model is corroborated by comparison with numerical simulations based on the vortex lattice method. Performance is studied both in tethered (as on a sting in a wind tunnel) and in free flight. Wing twist is identified as the main parameter affecting flight performance: at high speeds, it improves efficiency, the rate of climb and the maximal level speed; at low speeds, it allows flying slower. It is demonstrated that, under most circumstances, the difference in performance between tethered and free flight is small.

  17. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. Constants of the models have a clear physical meaning. The models can also be used to calculate thermodynamic parameters of the mixture in a state of chemical equilibrium.

  18. [Research on the measurement of flue-dust concentration in Vis, IR spectral region].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin

    2008-10-01

    In the measurement of flue-dust concentration based on the transmission method, the dependent model algorithm was used to invert the flue-dust concentration in the visible, infrared and visible-infrared spectral regions respectively. By analysis and comparison of the accuracy, linearity and sensitivity of the inverted flue-dust concentration, the optimal spectral region was determined. Meanwhile, the influence of water droplets with different size distributions and volume concentrations was simulated, and a method was proposed which has the advantages of simplicity, rapidity, and suitability for on-line measurement. Simulation experiments illustrate that the flue-dust concentration can be inverted very well in the visible-infrared spectral region, and that it is feasible to use the ratio-constrained light extinction method to overcome the influence of water droplets. The inverse results all remain satisfactory when 2% stochastic noise is added to the value of the light extinction.

  19. A mixed-mode crack analysis of isotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Yau, J. F.; Wang, S. S.; Corten, H. T.

    1980-01-01

    A simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems is presented. The analysis is formulated on the basis of conservation laws of elasticity and of fundamental relationships in fracture mechanics. The problem is reduced to the determination of mixed-mode stress-intensity factor solutions in terms of conservation integrals involving known auxiliary solutions. One of the salient features of the present analysis is that the stress-intensity solutions can be determined directly by using information extracted in the far field. Several examples with solutions available in the literature are solved to examine the accuracy and other characteristics of the current approach. This method is demonstrated to be superior in its numerical simplicity and computational efficiency to other approaches. Solutions of more complicated and practical engineering fracture problems dealing with a crack emanating from a circular hole are also presented to illustrate the capacity of this method.

  20. Structure Assembly by a Heterogeneous Team of Robots Using State Estimation, Generalized Joints, and Mobile Parallel Manipulators

    NASA Technical Reports Server (NTRS)

    Komendera, Erik E.; Adhikari, Shaurav; Glassner, Samantha; Kishen, Ashwin; Quartaro, Amy

    2017-01-01

    Autonomous robotic assembly by mobile field robots has seen significant advances in recent decades, yet practicality remains elusive. Identified challenges include better use of state estimation and reasoning with uncertainty, spreading out tasks to specialized robots, and implementing representative joining methods. This paper proposes replacing 1) self-correcting mechanical linkages with generalized joints for improved applicability, 2) serial assembly manipulators with parallel manipulators for higher precision and stability, and 3) all-in-one robots with a heterogeneous team of specialized robots for agent simplicity. This paper then describes a general assembly algorithm utilizing state estimation. Finally, these concepts are tested in the context of solar array assembly, requiring a team of robots to assemble, bond, and deploy a set of solar panel mockups to a backbone truss to an accuracy not built into the parts. This paper presents the results of these tests.

  1. Perfect transmission at oblique incidence by trigonal warping in graphene P-N junctions

    NASA Astrophysics Data System (ADS)

    Zhang, Shu-Hui; Yang, Wen

    2018-01-01

    We develop an analytical mode-matching technique for the tight-binding model to describe electron transport across graphene P-N junctions. This method shares the simplicity of the conventional mode-matching technique for the low-energy continuum model and the accuracy of the tight-binding model over a wide range of energies. It further reveals an interesting phenomenon on a sharp P-N junction: the disappearance of the well-known Klein tunneling (i.e., perfect transmission) at normal incidence and the appearance of perfect transmission at oblique incidence due to trigonal warping at energies beyond the linear Dirac regime. We show that this phenomenon arises from the conservation of a generalized pseudospin in the tight-binding model. We expect this effect to be experimentally observable in graphene and other Dirac fermions systems, such as the surface of three-dimensional topological insulators.

  2. Propagators for the Time-Dependent Kohn-Sham Equations: Multistep, Runge-Kutta, Exponential Runge-Kutta, and Commutator Free Magnus Methods.

    PubMed

    Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto

    2018-05-09

    We examine various integration schemes for the time-dependent Kohn-Sham equations. Contrary to the time-dependent Schrödinger equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and the commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
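
    For orientation, the fourth-order commutator-free Magnus step is commonly written in the two-exponential form below (up to ordering conventions); the notation here is assumed rather than taken from the paper.

    ```latex
    % Fourth-order commutator-free Magnus step, H_i evaluated at Gauss nodes.
    U(t+\Delta t,\,t) \;\approx\;
      e^{-\,i\,\Delta t\,\left(\alpha_1 H_1 + \alpha_2 H_2\right)}\;
      e^{-\,i\,\Delta t\,\left(\alpha_2 H_1 + \alpha_1 H_2\right)},
    \qquad
    \alpha_{1,2} = \frac{3 \mp 2\sqrt{3}}{12},
    \quad
    H_i = H\!\left(t + c_i\,\Delta t\right),
    \quad
    c_{1,2} = \frac{1}{2} \mp \frac{\sqrt{3}}{6}.
    ```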

  3. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    PubMed Central

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet and facilitates third-party infrastructure and applications. While customers have no visibility on how their data is stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization. PMID:26380364

  4. Induction of Social Behavior in Zebrafish: Live Versus Computer Animated Fish as Stimuli

    PubMed Central

    Qin, Meiying; Wong, Albert; Seguin, Diane

    2014-01-01

    The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents zebrafish are diurnal, thus visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues and the question of whether computer animated images versus live stimulus fish have differential effects have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, computer animated images of conspecifics presented by two software applications, the previously employed General Fish Animator, and a new application Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling) manifesting as reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish, or 3D images, is not required and 2D computer animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish. PMID:24575942

  5. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (feasibility study) to the development of new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to detect reliably the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event-related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be sought in alternative approaches to sensing, collecting and processing the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
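
    A hedged sketch of the Bhattacharyya distance between two Gaussian feature classes, the kind of separability measure the project evaluated, is given below; the feature statistics are illustrative.

    ```python
    # Bhattacharyya distance for two multivariate Gaussian class models.
    import numpy as np

    def bhattacharyya(mu1, cov1, mu2, cov2):
        cov = 0.5 * (cov1 + cov2)                     # pooled covariance
        diff = mu2 - mu1
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
        term2 = 0.5 * np.log(np.linalg.det(cov) /
                             np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    mu_rest, mu_task = np.array([0.0, 0.0]), np.array([1.2, 0.8])
    cov_rest, cov_task = np.eye(2), 1.5 * np.eye(2)
    print(f"D_B = {bhattacharyya(mu_rest, cov_rest, mu_task, cov_task):.3f}")
    ```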

  6. Simplified three-dimensional model provides anatomical insights in lizards' caudal autotomy as printed illustration.

    PubMed

    De Amorim, Joana D C G; Travnik, Isadora; De Sousa, Bernadete M

    2015-03-01

    Lizards' caudal autotomy is a complex and widely employed antipredator mechanism involving thorough anatomical adaptations. Because of the small size and intricate structures of the caudal vertebrae, their anatomy is hard to convey clearly to students and researchers from other areas. Three-dimensional models are prodigious tools for unveiling anatomical nuances. Some of the techniques used to create them can produce irregular and complicated forms which, despite being very accurate, lack didactic uniformity and simplicity. Since both are considered fundamental for comprehension, a simplified model could be the key to improved learning. The model presented here depicts the caudal osteology of Tropidurus itambere, and was designed to be concise, so as to be easily assimilated, yet complete, so as not to compromise the informative aspect. The creation process requires only basic skills in manipulating polygons in 3D modeling software, in addition to appropriate knowledge of the structure to be modeled. As reference for the modeling, we used microscopic observation and a photographic database of the caudal structures. This way, no advanced laboratory equipment was needed, and all biological materials were preserved for future research. We therefore propose a wider usage of simplified 3D models, both in the classroom and as illustrations for scientific publications.

  7. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.

  8. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    PubMed

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet and facilitates third-party infrastructure and applications. While customers have no visibility on how their data is stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  9. Induction of social behavior in zebrafish: live versus computer animated fish as stimuli.

    PubMed

    Qin, Meiying; Wong, Albert; Seguin, Diane; Gerlai, Robert

    2014-06-01

    The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents zebrafish are diurnal, thus visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues and the question of whether computer animated images versus live stimulus fish have differential effects have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, computer animated images of conspecifics presented by two software applications, the previously employed General Fish Animator, and a new application Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling) manifesting as reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish, or 3D images, is not required and 2D computer animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish.

  10. Speed versus accuracy in decision-making ants: expediting politics and policy implementation.

    PubMed

    Franks, Nigel R; Dechaume-Moncharmont, François-Xavier; Hanmore, Emma; Reynolds, Jocelyn K

    2009-03-27

    Compromises between speed and accuracy are seemingly inevitable in decision-making when accuracy depends on time-consuming information gathering. In collective decision-making, such compromises are especially likely because information is shared to determine corporate policy. This political process will also take time. Speed-accuracy trade-offs occur among house-hunting rock ants, Temnothorax albipennis. A key aspect of their decision-making is quorum sensing in a potential new nest. Finding a sufficient number of nest-mates, i.e. a quorum threshold (QT), in a potential nest site indicates that many ants find it suitable. Quorum sensing collates information. However, the QT is also used as a switch, from recruitment of nest-mates to their new home by slow tandem running, to recruitment by carrying, which is three times faster. Although tandem running is slow, it effectively enables one successful ant to lead and teach another the route between the nests. Tandem running creates positive feedback; more and more ants are shown the way, as tandem followers become, in turn, tandem leaders. The resulting corps of trained ants can then quickly carry their nest-mates; but carried ants do not learn the route. Therefore, the QT seems to set both the amount of information gathered and the speed of the emigration. Low QTs might cause more errors and a slower emigration--the worst possible outcome. This possible paradox of quick decisions leading to slow implementation might be resolved if the ants could deploy another positive-feedback recruitment process when they have used a low QT. Reverse tandem runs occur after carrying has begun and lead ants back from the new nest to the old one. Here we show experimentally that reverse tandem runs can bring lost scouts into an active role in emigrations and can help to maintain high-speed emigrations. Thus, in rock ants, although quick decision-making and rapid implementation of choices are initially in opposition, a third recruitment method can restore rapid implementation after a snap decision. This work reveals a principle of widespread importance: the dynamics of collective decision-making (i.e. the politics) and the dynamics of policy implementation are sometimes intertwined, and only by analysing the mechanisms of both can we understand certain forms of adaptive organization.

  11. Precision atomic beam density characterization by diode laser absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxley, Paul; Wihbey, Joseph

    2016-09-15

    We provide experimental and theoretical details of a simple technique to determine absolute line-of-sight integrated atomic beam densities based on resonant laser absorption. In our experiments, a thermal lithium beam is chopped on and off while the frequency of a laser crossing the beam at right angles is scanned slowly across the resonance transition. A lock-in amplifier detects the laser absorption signal at the chop frequency from which the atomic density is determined. The accuracy of our experimental method is confirmed using the related technique of wavelength modulation spectroscopy. For beams which absorb of order 1% of the incident laser light, our measurements allow the beam density to be determined to an accuracy better than 5% and with a precision of 3% on a time scale of order 1 s. Fractional absorptions of order 10^-5 are detectable on a one-minute time scale when we employ a double laser beam technique which limits laser intensity noise. For a lithium beam with a thickness of 9 mm, we have measured atomic densities as low as 5 × 10^4 atoms cm^-3. The simplicity of our technique and the details we provide should allow our method to be easily implemented in most atomic or molecular beam apparatuses.

  12. APPLICATION OF ISOTOPE ENCEPHALOGRAPHY AND ELECTROENCEPHALOSCOPY FOR LOCALIZATION OF BRAIN TUMOURS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamov, V.N.; Badmayev, C.N.; Bekhtereva, N.P.

    1959-10-31

    The problems of diagnosis and localization of brain tumors in some cases present many difficulties and make the neurosurgeon seek additional methods of investigation. In such circumstances the use of the tracer technique in diagnostics is of considerable help, as it has obvious advantages compared with other methods of investigation, such as safety, painlessness, non-traumatism, absence of undesirable after-effects, accuracy, and relative simplicity. The present communication is based on the results of clinical observations on 150 patients with verified brain tumors. Analyses of the data show that the accuracy of the brain tumor localizations varies, depending upon the depth of the tumor site and the concentration of labelled material in the area of tumor growth. The diagnostic value of the method is doubtful in cases of tumors of the posterior fossa, the base of the brain, or lesions of the median line. The application of isotope encephalography is successfully supplemented by a new method of investigation, i.e., electroencephaloscopy, which allows the localization of deeply set tumors. Possibilities and limitations of the method are discussed. It is concluded that isotope encephalography and electroencephaloscopy represent very valuable diagnostic methods which, alongside other auxiliary methods, are widely used in the diagnosis of brain tumors. (C.H.)

  13. Geometry control of long-span continuous girder concrete bridge during construction through finite element model updating

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi

    2016-04-01

    In bridge construction, geometry control is critical to ensure that the final constructed bridge has the same shape as designed. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating, due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
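
    A rough sketch of such a surrogate-based updating loop is given below; fe_deflection() is a hypothetical stand-in for the real FE solver, and the parameter bounds and measured value are illustrative.

    ```python
    # Latin hypercube samples of uncertain parameters train a Kriging
    # (Gaussian-process) surrogate of the FE-predicted deflection, which is
    # then searched against the measured deflection to update the parameters.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor

    def fe_deflection(e_gpa, rho):
        return 120.0 / e_gpa + 0.002 * rho            # toy response (mm)

    sampler = qmc.LatinHypercube(d=2, seed=0)
    params = qmc.scale(sampler.random(n=40), [30.0, 2300.0], [40.0, 2600.0])

    y = fe_deflection(params[:, 0], params[:, 1])
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(params, y)

    measured = 8.4                                    # field deflection (mm)
    grid = qmc.scale(sampler.random(n=2000), [30.0, 2300.0], [40.0, 2600.0])
    best = grid[np.argmin((surrogate.predict(grid) - measured) ** 2)]
    print("updated E (GPa), density (kg/m^3):", best.round(2))
    ```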

  14. Precision atomic beam density characterization by diode laser absorption spectroscopy.

    PubMed

    Oxley, Paul; Wihbey, Joseph

    2016-09-01

    We provide experimental and theoretical details of a simple technique to determine absolute line-of-sight integrated atomic beam densities based on resonant laser absorption. In our experiments, a thermal lithium beam is chopped on and off while the frequency of a laser crossing the beam at right angles is scanned slowly across the resonance transition. A lock-in amplifier detects the laser absorption signal at the chop frequency from which the atomic density is determined. The accuracy of our experimental method is confirmed using the related technique of wavelength modulation spectroscopy. For beams which absorb of order 1% of the incident laser light, our measurements allow the beam density to be determined to an accuracy better than 5% and with a precision of 3% on a time scale of order 1 s. Fractional absorptions of order 10^-5 are detectable on a one-minute time scale when we employ a double laser beam technique which limits laser intensity noise. For a lithium beam with a thickness of 9 mm, we have measured atomic densities as low as 5 × 10^4 atoms cm^-3. The simplicity of our technique and the details we provide should allow our method to be easily implemented in most atomic or molecular beam apparatuses.

  15. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  16. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.
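
    For orientation, the Perturbed Lagrangian functional referred to in both records above is commonly written as follows (notation assumed here): W is the strain energy, J = det F, p the pressure multiplier and kappa the bulk (penalty) modulus.

    ```latex
    % Perturbed Lagrangian functional; condensing p recovers the penalty form,
    % and kappa -> infinity recovers the Lagrange-multiplier limit.
    \Pi(\mathbf{u},p) \;=\; \int_{\Omega_0}
      \left[\, W(\mathbf{u}) \;+\; p\,(J-1) \;-\; \frac{p^{2}}{2\kappa} \right] dV,
    \qquad
    p = \kappa\,(J-1)
    \;\Rightarrow\;
    \Pi(\mathbf{u}) = \int_{\Omega_0}\left[\,W + \tfrac{\kappa}{2}\,(J-1)^{2}\right] dV .
    ```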

  17. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization

    PubMed Central

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-01-01

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors. PMID:28394268

  18. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization.

    PubMed

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-04-10

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors.
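
    A hedged sketch of the BLE-prefilter-then-KNN idea is given below; the radiomap, beacon zoning and RSS values are synthetic illustrations rather than the paper's data.

    ```python
    # BLE proximity first prunes the Wi-Fi radiomap to fingerprints near the
    # strongest heard beacon; ordinary KNN then runs on the reduced set.
    import numpy as np

    rng = np.random.default_rng(1)
    positions = rng.uniform(0, 50, size=(500, 2))    # fingerprint coords (m)
    wifi_rss = rng.normal(-60, 8, size=(500, 4))     # RSS from 4 APs (dBm)
    beacon_id = (positions[:, 0] // 17).astype(int)  # zone of nearest beacon

    def i_knn(sample_rss, heard_beacon, k=5):
        """KNN position estimate restricted to the heard beacon's zone."""
        mask = beacon_id == heard_beacon             # BLE pre-filter
        d = np.linalg.norm(wifi_rss[mask] - sample_rss, axis=1)
        nearest = np.argsort(d)[:k]
        return positions[mask][nearest].mean(axis=0)

    print(i_knn(sample_rss=np.array([-55, -63, -70, -58]), heard_beacon=1))
    ```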

  19. Synthesis of molecular imprinted polymers for selective extraction of domperidone from human serum using high performance liquid chromatography with fluorescence detection.

    PubMed

    Salehi, Simin; Rasoul-Amini, Sara; Adib, Noushin; Shekarchi, Maryam

    2016-08-01

    In this study a novel method is described for selective quantification of domperidone in biological matrices, applying molecular imprinted polymers (MIPs) as a sample clean-up procedure and using high performance liquid chromatography coupled with a fluorescence detector. MIPs were synthesized with chloroform as the porogen, ethylene glycol dimethacrylate as the crosslinker, methacrylic acid as the monomer, and domperidone as the template molecule. The new imprinted polymer was used as a molecular sorbent for separation of domperidone from serum. Molecular recognition properties, binding capacity and selectivity of the MIPs were determined. The results demonstrated exceptional affinity for domperidone in biological fluids. The domperidone analytical method using MIPs was verified according to validation parameters such as selectivity, linearity (5-80 ng/mL, r^2 = 0.9977), precision and accuracy (10-40 ng/mL, intra-day = 1.7-5.1%, inter-day = 4.5-5.9%, accuracy 89.07-98.9%). The limits of detection (LOD) and quantitation (LOQ) of domperidone were 0.0279 and 0.092 ng/mL, respectively. The simplicity and suitable validation parameters make this a highly valuable selective bioequivalence method for domperidone analysis in human serum. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Design and experimental validation of novel 3D optical scanner with zoom lens unit

    NASA Astrophysics Data System (ADS)

    Huang, Jyun-Cheng; Liu, Chien-Sheng; Chiang, Pei-Ju; Hsu, Wei-Yan; Liu, Jian-Liang; Huang, Bai-Hao; Lin, Shao-Ru

    2017-10-01

    Optical scanners play a key role in many three-dimensional (3D) printing and CAD/CAM applications. However, existing optical scanners are generally designed to provide either a wide scanning area or a high 3D reconstruction accuracy from a lens with a fixed focal length. In the former case, the scanning area is increased at the expense of the reconstruction accuracy, while in the latter case, the reconstruction performance is improved at the expense of a more limited scanning range. In other words, existing optical scanners compromise between the scanning area and the reconstruction accuracy. Accordingly, the present study proposes a new scanning system including a zoom-lens unit, which combines both a wide scanning area and a high 3D reconstruction accuracy. In the proposed approach, the object is scanned initially under a suitable low-magnification setting for the object size (setting 1), resulting in a wide scanning area but a poor reconstruction resolution in complicated regions of the object. The complicated regions of the object are then rescanned under a high-magnification setting (setting 2) in order to improve the accuracy of the original reconstruction results. Finally, the models reconstructed after each scanning pass are combined to obtain the final reconstructed 3D shape of the object. The feasibility of the proposed method is demonstrated experimentally using a laboratory-built prototype. It is shown that the scanner has a high reconstruction accuracy over a large scanning area. In other words, the proposed optical scanner has significant potential for 3D engineering applications.

  1. Support-vector-machines-based multidimensional signal classification for fetal activity characterization

    NASA Astrophysics Data System (ADS)

    Ribes, S.; Voicu, I.; Girault, J. M.; Fournier, M.; Perrotin, F.; Tranquart, F.; Kouamé, D.

    2011-03-01

    Electronic fetal monitoring may be required throughout pregnancy to closely monitor specific fetal and maternal disorders. Currently used methods suffer from many limitations and are not sufficient to evaluate fetal asphyxia. Fetal activity parameters such as movements, heart rate and associated parameters are essential indicators of fetal well-being, yet no current device provides a simultaneous and sufficient estimate of all of these parameters. For this purpose, we built a multi-transducer, multi-gate Doppler system and developed dedicated signal processing techniques for fetal activity parameter extraction, in order to investigate fetal asphyxia or well-being through fetal activity parameters. As a step toward this goal, this paper shows the preliminary feasibility of separating normal and compromised fetuses using our system. A data set consisting of two groups of fetal signals (normal and compromised) was established and provided by physicians. From the estimated parameters, an instantaneous Manning-like score, referred to as the ultrasonic score, was introduced and used together with movements, heart rate and associated parameters in a classification process based on the Support Vector Machine (SVM) method. The influence of the fetal activity parameters and the performance of the SVM were evaluated by computing the sensitivity, specificity, percentage of support vectors and total classification accuracy. We showed that the data could be separated into two sets, normal fetuses and compromised fetuses, with excellent agreement with the clinical classification performed by physicians.

  2. A Novel Approach to the Identification of Compromised Pulmonary Systems in Smokers by Exploiting Tidal Breathing Patterns.

    PubMed

    Rakshit, Raj; Khasnobish, Anwesha; Chowdhury, Arijit; Sinharay, Arijit; Pal, Arpan; Chakravarty, Tapas

    2018-04-25

    Smoking causes unalterable physiological abnormalities in the pulmonary system. This is emerging as a serious threat worldwide. Unlike spirometry, tidal breathing does not require subjects to undergo forceful breathing maneuvers and is progressing as a new direction towards pulmonary health assessment. The aim of the paper is to evaluate whether tidal breathing signatures can indicate deteriorating adult lung condition in an otherwise healthy person. If successful, such a system can be used as a pre-screening tool for all people before some of them need to undergo a thorough clinical checkup. This work presents a novel systematic approach to identify compromised pulmonary systems in smokers from acquired tidal breathing patterns. Tidal breathing patterns are acquired during restful breathing of adult participants. Thereafter, physiological attributes are extracted from the acquired tidal breathing signals. Finally, a unique classification approach of locally weighted learning with ridge regression (LWL-ridge) is implemented, which handles the subjective variations in tidal breathing data without performing feature normalization. The LWL-ridge classifier recognized compromised pulmonary systems in smokers with an average classification accuracy of 86.17% along with a sensitivity of 80% and a specificity of 92%. The implemented approach outperformed other variants of LWL as well as other standard classifiers and generated comparable results when applied on an external cohort. This end-to-end automated system is suitable for pre-screening people routinely for early detection of lung ailments as a preventive measure in an infrastructure-agnostic way.
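
    The classification step can be pictured with the toy sketch below: a locally weighted ridge regression that fits a separate weighted linear model around each test point and thresholds the fitted value. This is only a plausible reading of "LWL-ridge"; the Gaussian kernel, its width tau, the ridge penalty lam and the 0.5 decision threshold are assumptions, not the authors' settings.

      import numpy as np

      def lwl_ridge_predict(x_test, X, y, tau=1.0, lam=1e-2):
          # Local weights: training points near x_test dominate the fit.
          w = np.exp(-np.sum((X - x_test) ** 2, axis=1) / (2 * tau ** 2))
          Xb = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
          # Weighted ridge normal equations solved locally for this query.
          A = Xb.T @ (w[:, None] * Xb) + lam * np.eye(Xb.shape[1])
          b = Xb.T @ (w * y)
          theta = np.linalg.solve(A, b)
          score = np.append(x_test, 1.0) @ theta
          return int(score > 0.5)                        # 1 = compromised, 0 = healthy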

  3. Replica-exchange Wang Landau sampling: pushing the limits of Monte Carlo simulations in materials sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus

    We describe the study of thermodynamics of materials using replica-exchange Wang Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
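
    A serial Wang-Landau loop, shown here on the simpler 2D Ising model rather than the Heisenberg model used in the paper, illustrates the flat-histogram mechanics that REWL parallelizes over overlapping energy windows. Lattice size, the modification-factor schedule (truncated here to keep the demo fast) and the flatness criterion are illustrative choices.

      import numpy as np

      def wang_landau_ising(L=8, lnf_final=1e-3, flat=0.8, sweeps=10000, seed=0):
          rng = np.random.default_rng(seed)
          s = rng.choice([-1, 1], size=(L, L))
          E = -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))
          levels = np.arange(-2 * L * L, 2 * L * L + 1, 4)  # allowed Ising energies
          idx = {e: i for i, e in enumerate(levels)}
          lng = np.zeros(len(levels))    # running estimate of ln g(E)
          hist = np.zeros(len(levels))   # visit histogram for the flatness check
          lnf = 1.0                      # ln of the modification factor f
          while lnf > lnf_final:         # production runs push this to ~1e-8
              for _ in range(sweeps):
                  i, j = rng.integers(L, size=2)
                  dE = 2 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                      + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                  # Accept with probability g(E)/g(E'): favour rarely seen energies.
                  if np.log(rng.random()) < lng[idx[E]] - lng[idx[E + dE]]:
                      s[i, j] *= -1
                      E += dE
                  lng[idx[E]] += lnf
                  hist[idx[E]] += 1
              seen = hist > 0
              if hist[seen].min() > flat * hist[seen].mean():
                  hist[:] = 0
                  lnf /= 2.0             # the standard f -> sqrt(f) schedule
          return levels, lng

      levels, lng = wang_landau_ising()   # lng approximates the density of states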

  4. Exploring machine-learning-based control plane intrusion detection techniques in software defined optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie

    2017-12-01

    In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats which compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON, with properly selected features such as bandwidth, route length, etc. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based control plane intrusion detection techniques.
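
    A generic supervised-learning pipeline of this kind might look like the sketch below. The abstract names the features (bandwidth, route length) but not the classifier, so the random forest, the synthetic feature distributions and the class split here are all stand-ins for illustration only.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      # Hypothetical connection-request features: [bandwidth (Gb/s), route length (hops)].
      rng = np.random.default_rng(0)
      normal = np.column_stack([rng.normal(10, 2, 500), rng.integers(2, 6, 500)])
      attack = np.column_stack([rng.normal(35, 5, 60), rng.integers(6, 12, 60)])
      X = np.vstack([normal, attack])
      y = np.array([0] * 500 + [1] * 60)            # 1 = intrusion attempt

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))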

  5. A new approach for cancelable iris recognition

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Sui, Yan; Zhou, Zhi; Du, Yingzi; Zou, Xukai

    2010-04-01

    The iris is a stable and reliable biometric for positive human identification. However, the traditional iris recognition scheme raises several privacy concerns. One's iris pattern is permanently bound to the individual and cannot be changed; hence, once it is stolen, this biometric is lost forever, along with all the applications in which it is used. Thus, new methods are desirable to secure the original pattern and to ensure its revocability and the availability of alternatives when it is compromised. In this paper, we propose a novel scheme which incorporates iris features, a non-invertible transformation and data encryption to achieve "cancelability" and at the same time increases iris recognition accuracy.
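
    One common way to build such a cancelable, non-invertible transform is a key-seeded random projection; the sketch below shows that idea only, not the authors' scheme (which also involves encryption). The projection dimension, the SHA-256 key seeding and the binarization are illustrative assumptions.

      import numpy as np
      from hashlib import sha256

      def cancelable_template(iris_code, user_key, out_dim=512):
          # Key-seeded random projection: non-invertible when out_dim < len(code);
          # revocation simply means issuing a new key (and hence a new template).
          seed = int.from_bytes(sha256(user_key).digest()[:8], "big")
          rng = np.random.default_rng(seed)
          P = rng.standard_normal((out_dim, iris_code.size))
          return (P @ iris_code > 0).astype(np.uint8)    # binarized template

      code = np.random.default_rng(1).integers(0, 2, 2048)  # stand-in iris code
      t1 = cancelable_template(code, b"key-2024")
      t2 = cancelable_template(code, b"key-2025")           # re-issued after compromise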

  6. The Evaluation of a Noninvasive Respiratory Volume Monitor in Pediatric Patients Undergoing General Anesthesia.

    PubMed

    Gomez-Morad, Andrea D; Cravero, Joseph P; Harvey, Brian C; Bernier, Rachel; Halpin, Erin; Walsh, Brian; Nasr, Viviane G

    2017-12-01

    Pediatric patients following surgery are at risk for respiratory compromise such as hypoventilation and hypoxemia, depending on their age, comorbidities, and type of surgery. Quantitative measurement of ventilation in nonintubated infants/children is a difficult and inexact undertaking. Current respiratory assessment in nonintubated patients relies on oximetry data, respiratory rate (RR) monitors, and subjective clinical assessment, but there is no objective measure of respiratory parameters that could be utilized to predict early respiratory compromise. New advances in technology and digital signal processing have led to the development of an impedance-based respiratory volume monitor (RVM, ExSpiron, Respiratory Motion, Inc, Waltham, MA). The RVM has been shown to provide accurate real-time, continuous, noninvasive measurements of tidal volume (TV), minute ventilation (MV), and RR in adult patients. In this prospective observational study, our primary aim was to determine whether the RVM accurately measures TV, RR, and MV in pediatric patients. A total of 72 pediatric patients (27 females, 45 males), ASA I to III, undergoing general anesthesia with endotracheal intubation were enrolled. After endotracheal intubation, continuous data on MV, TV, and RR were recorded from the RVM and an in-line monitoring spirometer (NM3 monitor, Philips Healthcare). RVM and NM3 measurements of MV, TV, and RR were compared during a 10-minute period prior to incision ("Presurgery") and a 10-minute period after the end of surgery ("Postsurgery"). Relative errors were calculated over 1-minute segments within each 10-minute period. Bias, precision, and accuracy were calculated using Bland-Altman analyses, and paired-difference equivalence tests were performed. Combined across the Presurgery and Postsurgery periods, the RVM's mean measurement bias (RVM - NM3 measurement) was -3.8% for MV (95% limits of agreement, ±1.96 SD: -19.9% to 12.2%), -4.9% for TV (-21.0% to 11.3%), and 1.1% for RR (-4.1% to 6.2%). The mean measurement accuracies for MV, TV, and RR were 11.9%, 12.0%, and 4.2% (0.6 breaths/min), respectively; note that lower accuracy numbers correspond to more accurate RVM measurements. The equivalence tests rejected the null hypothesis that the RVM and NM3 have different mean values, concluding with 90% power that the measurements of MV, TV, and RR from the RVM and NM3 are equivalent within ±10%. Our data indicate acceptable agreement between RVM and NM3 measurements in mechanically ventilated pediatric patients. Future studies assessing the capability of the RVM to detect respiratory compromise in other clinical settings are needed.
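
    The Bland-Altman quantities reported above (mean bias plus 95% limits of agreement) reduce to a few lines of arithmetic; the sketch below uses made-up minute-ventilation arrays purely to show the computation.

      import numpy as np

      def bland_altman(rvm, nm3):
          # Relative bias and 95% limits of agreement, as percentages of the
          # reference (NM3) measurement.
          rel_err = 100.0 * (rvm - nm3) / nm3
          bias = rel_err.mean()
          sd = rel_err.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      mv_rvm = np.array([5.1, 4.3, 6.0, 5.6])   # minute ventilation, L/min (made-up)
      mv_nm3 = np.array([5.4, 4.5, 6.2, 5.9])
      print(bland_altman(mv_rvm, mv_nm3))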

  7. The effects of aging on the speed-accuracy compromise: Boundary optimality in the diffusion model.

    PubMed

    Starns, Jeffrey J; Ratcliff, Roger

    2010-06-01

    We evaluated age-related differences in the optimality of decision boundary settings in a diffusion model analysis. In the model, the width of the decision boundary represents the amount of evidence that must accumulate in favor of a response alternative before a decision is made. Wide boundaries lead to slow but accurate responding, and narrow boundaries lead to fast but inaccurate responding. There is a single value of boundary separation that produces the most correct answers in a given period of time, and we refer to this value as the reward rate optimal boundary (RROB). We consistently found across a variety of decision tasks that older adults used boundaries that were much wider than the RROB value. Young adults used boundaries that were closer to the RROB value, although age differences in optimality were smaller with instructions emphasizing speed than with instructions emphasizing accuracy. Young adults adjusted their boundary settings to more closely approach the RROB value when they were provided with accuracy feedback and extensive practice. Older participants showed no evidence of making boundary adjustments in response to feedback or task practice, and they consistently used boundary separation values that produced accuracy levels near asymptote. Our results suggest that young adults attempt to balance speed and accuracy to achieve the most correct answers per unit time, whereas older adults attempt to minimize errors even if they must respond quite slowly to do so. (c) 2010 APA, all rights reserved
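
    The RROB concept can be made concrete with the textbook closed forms for an unbiased drift-diffusion model (error rate and mean decision time as functions of boundary separation, as in Bogacz et al., 2006) and a grid search for the separation that maximizes reward rate. The parameter values, the error penalty term and the grid are illustrative, not taken from the study.

      import numpy as np

      def reward_rate(a, v=0.2, s=0.1, t_nd=0.4, d_penalty=0.5):
          # a = boundary separation, v = drift, s = diffusion noise,
          # t_nd = non-decision time, d_penalty = extra delay after errors.
          er = 1.0 / (1.0 + np.exp(v * a / s**2))            # error rate
          dt = (a / (2 * v)) * np.tanh(v * a / (2 * s**2))   # mean decision time
          return (1 - er) / (dt + t_nd + er * d_penalty)     # correct answers / time

      a_grid = np.linspace(0.01, 0.5, 500)
      rrob = a_grid[np.argmax(reward_rate(a_grid))]          # reward-rate-optimal boundary
      print(f"RROB ~ {rrob:.3f}")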

  8. Predictions of Daily Milk and Fat Yields, Major Groups of Fatty Acids, and C18:1 cis-9 from Single Milking Data without a Milking Interval

    PubMed Central

    Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène

    2015-01-01

    Simple Summary: Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits recorded easily by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models, which do not include the milking interval, should facilitate their use by breeding and milk recording organizations. Abstract: Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk); and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for models which included an MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings, and from 96.9% to 98.3% when estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379
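
    The kind of MI-free prediction equation described here is, in essence, a multiple linear regression from single-milking traits to daily yield. The sketch below shows that shape only; the covariates (days in milk, parity) and all numbers are invented stand-ins for the traits the study actually used.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Made-up records: [AM milk yield (kg), days in milk, parity] -> daily yield (kg).
      X = np.array([[14.2, 120, 2], [11.8, 210, 1], [16.0, 60, 3],
                    [13.1, 150, 2], [12.4, 180, 1], [15.2, 90, 3]])
      y = np.array([27.9, 23.5, 31.8, 26.0, 24.6, 30.1])

      model = LinearRegression().fit(X, y)      # MI-free predictor of daily yield
      print(model.predict([[13.5, 140, 2]]))    # estimated daily yield for a new cow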

  9. Analytical Models of Exoplanetary Atmospheres. IV. Improved Two-stream Radiative Transfer for the Treatment of Aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heng, Kevin; Kitzmann, Daniel, E-mail: kevin.heng@csh.unibe.ch, E-mail: daniel.kitzmann@csh.unibe.ch

    We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method renders it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.

  10. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations

    PubMed Central

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-01-01

    Background: High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interactive factors. However, there are limited epidemiological data available for the assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high accuracy levels, until now, no validated epidemiological tool exists for surveying occupational health and performance in these populations. Aim: To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to musculoskeletal health conditions and performance in high-performance marine craft populations. Method: A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure, and health, and systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a tool. Finally, the questionnaire was pilot tested. Results: The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. Conclusions: The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. PMID:27324717
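
    The two validity indices defined above are simple proportions and averages, as the sketch below shows with a made-up ratings matrix (rows are items, columns are expert raters on the study's 1-4 scale).

      import numpy as np

      ratings = np.array([[4, 3, 4, 4],
                          [3, 4, 2, 4],
                          [4, 4, 4, 3]])

      i_cvi = (ratings >= 3).mean(axis=1)   # I-CVI: share of experts rating 3 or 4
      s_cvi_ave = i_cvi.mean()              # S-CVI/Ave: average of the item I-CVIs
      print(i_cvi, s_cvi_ave)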

  11. AHA! Version 2.0: More Adaptation Flexibility for Authors.

    ERIC Educational Resources Information Center

    De Bra, Paul; Aerts, Ad; Smits, David; Stash, Natalia

    AHA! is a simple Web-based adaptive hypermedia system. Because of this simplicity it has been studied and experimented with in several research groups. This paper identifies shortcomings in AHA! and presents AHA! version 2.0 which tries to overcome the known problems with AHA! while maintaining its biggest asset: simplicity. The paper illustrates…

  12. Occam's Rattle: Children's Use of Simplicity and Probability to Constrain Inference

    ERIC Educational Resources Information Center

    Bonawitz, Elizabeth Baraff; Lombrozo, Tania

    2012-01-01

    A growing literature suggests that generating and evaluating explanations is a key mechanism for learning and inference, but little is known about how children generate and select competing explanations. This study investigates whether young children prefer explanations that are simple, where simplicity is quantified as the number of causes…

  13. Gauss Modular-Arithmetic Congruence = Signal X Noise PRODUCT: Clock-model Archimedes HYPERBOLICITY Centrality INEVITABILITY: Definition: Complexity= UTTER-SIMPLICITY: Natural-Philosophy UNITY SIMPLICITY Redux!!!

    NASA Astrophysics Data System (ADS)

    Kummer, E. E.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Clock-model Archimedes [http://linkage.rockefeller.edu/wli/moved.8.04/1fnoise/index.ru.html] HYPERBOLICITY inevitability throughout physics/pure-maths: Newton-law F=ma, Heisenberg and classical uncertainty-principle=Parseval/Plancherel-theorems causes FUZZYICS definition: (so miscalled) "complexity" = UTTER-SIMPLICITY!!! Watkins [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/]-Hubbard [World According to Wavelets (96), p.14!]-Franklin [1795]-Fourier [1795; 1822]-Brillouin [1922] dual/inverse-space (k,w) analysis key to Fourier-unification in Archimedes hyperbolicity inevitability progress up Siegel cognition hierarchy-of-thinking (HoT): data-info.-know.-understand.-meaning-...-unity-simplicity = FUZZYICS!!! Frohlich-Mossbauer-Goldanskii-del Guidice [Nucl. Phys. B 251, 375 (85); 275, 185 (86)]-Young [arXiv:0705.4678v2 (5/31/07)] theory of health/life = aqueous-electret/ferroelectric protoplasm BEC = Archimedes-Siegel [Schrodinger Cent. Symp. (87); Symp. Fractals, MRS Fall Mtg. (89), 5 pprs] 1/w-"noise" Zipf-law power-spectrum hyperbolicity INEVITABILITY = Chi; Dirac delta-function limit w=0 concentration = BEC = Chi-Quong.

  14. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying, due to operating condition variations and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator built on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvements in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly reliable, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
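
    The RLS identification half of the decoupled scheme is standard and compact; the sketch below shows a generic recursive-least-squares update with a forgetting factor. The regressor layout (what goes into phi) depends on the battery model, which is not specified here, so treat it as a placeholder.

      import numpy as np

      class RLS:
          # Recursive least squares with forgetting: adapts the parameters theta
          # of a linear-in-parameters model y ~ phi . theta as samples arrive.
          def __init__(self, n, lam=0.999):
              self.theta = np.zeros(n)
              self.P = np.eye(n) * 1e3    # large initial covariance = weak prior
              self.lam = lam              # forgetting factor, slightly below 1

          def update(self, phi, y):
              # phi: regressor vector; y: measured output (e.g. terminal voltage).
              k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
              self.theta += k * (y - phi @ self.theta)             # parameter step
              self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
              return self.theta   # feed the identified model to the EKF estimator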

  15. Effect of seabed roughness on tidal current turbines

    NASA Astrophysics Data System (ADS)

    Gupta, Vikrant; Wan, Minping

    2017-11-01

    Tidal current turbines have been shown to have the potential to generate clean energy with negligible environmental impact. These devices, however, operate in regions of moderate to high current where the flow is highly turbulent. Flume tank experiments at IFREMER in Boulogne-Sur-Mer (France) and NAFL at the University of Minnesota (US) have shown that the level of turbulence and the boundary layer profile affect a turbine's power output and wake characteristics. A major factor that determines these marine flow characteristics is the seabed roughness. Experiments, however, cannot reproduce the high Reynolds number conditions of real marine flows. For that, we rely on numerical simulations. High-accuracy numerical methods for wall-bounded flows, such as DNS, are very expensive: the number of grid points needed to resolve the flow scales as Re^(9/4), where Re is the flow Reynolds number. Numerically affordable RANS methods, on the other hand, compromise on accuracy. Wall-modelled LES methods, which provide both accuracy and affordability, have improved tremendously in recent years. We discuss the application of such numerical methods for studying the effect of seabed roughness on marine flow features and their impact on turbine power output and wake characteristics. NSFC, Project Number 11672123.

  16. Effect of Reduced Tube Voltage on Diagnostic Accuracy of CT Colonography.

    PubMed

    Futamata, Yoshihiro; Koide, Tomoaki; Ihara, Riku

    2017-01-01

    The normal tube voltage in computed tomography colonography (CTC) is 120 kV. Some reports indicate that a low tube voltage (lower than 120 kV) technique plays a significant role in reducing radiation dose. However, to determine whether a lower tube voltage can reduce radiation dose without compromising diagnostic accuracy, an evaluation of images obtained while maintaining the volume CT dose index (CTDIvol) is required. This study investigated the effect of reduced tube voltage in CTC, without modifying radiation dose (i.e., at constant CTDIvol), on image quality. Evaluation of image quality involved the shape of the noise power spectrum, surface profiling with volume rendering (VR), and receiver operating characteristic (ROC) analysis. The shapes of the noise power spectra obtained with tube voltages of 80 kV and 100 kV differed from that produced with a tube voltage of 120 kV. Moreover, a higher standard deviation was observed on volume-rendered images generated using the reduced tube voltages. In addition, ROC analysis revealed a statistically significant drop in diagnostic accuracy with reduced tube voltage, showing that the modification of tube voltage affects volume-rendered images. The results of this study suggest that reducing tube voltage in CTC to lower radiation dose affects image quality and diagnostic accuracy.

  17. Nonlinear dispersion effects in elastic plates: numerical modelling and validation

    NASA Astrophysics Data System (ADS)

    Kijanka, Piotr; Radecki, Rafal; Packo, Pawel; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.

    2017-04-01

    Nonlinear features of elastic wave propagation have attracted significant attention recently. The particular interest herein relates to complex wave-structure interactions, which provide potential new opportunities for feature discovery and identification in a variety of applications. Due to the significant complexity associated with wave propagation in nonlinear media, numerical modelling and simulations are employed to facilitate the design and development of new measurement, monitoring and characterization systems. However, since very high spatio-temporal accuracy of numerical models is required, it is critical to evaluate their spectral properties and to tune discretization parameters for a compromise between accuracy and calculation time. Moreover, nonlinearities in structures give rise to various effects that are not present in linear systems, e.g. wave-wave interactions, higher harmonics generation, synchronism and, as recently reported, shifts in dispersion characteristics. This paper discusses a local computational model based on a new HYBRID approach for wave propagation in nonlinear media. The proposed approach combines the advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE). The methods are investigated in the context of their accuracy for predicting nonlinear wavefields, in particular shifts in dispersion characteristics for finite-amplitude waves and secondary wavefields. The results are validated against Finite Element (FE) calculations for guided waves in a copper plate. Critical modes, i.e. modes determining the accuracy of a model at a given excitation frequency, are identified, and guidelines for numerical model parameters are proposed.

  18. Efficient use of unlabeled data for protein sequence classification: a comparative study

    PubMed Central

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-01-01

    Background: Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if these data are supplemented with protein sequences that lack any class tags, the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by using only the sequence regions that are more likely to be biologically relevant for better prediction accuracy. As overly represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve the performance of the resulting classifiers. Results: Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. Conclusion: The unlabeled sequences used in the semi-supervised setting resemble unpolished gemstones: used as-is, they may carry unnecessary features and hence compromise classification accuracy, but once cut and polished, they improve the accuracy of the classifiers considerably. PMID:19426450

  19. Hair sparing does not compromise real-time magnetic resonance imaging guided stereotactic laser fiber placement for temporal lobe epilepsy.

    PubMed

    Singh, Shikha; Kumar, Kevin K; Rabon, Matthew J; Dolce, Dana; Halpern, Casey H

    2018-06-01

    Pre-operative scalp shaving is conventionally thought to simplify postoperative cranial wound care, lower the rate of wound infections, and ease optimal incision localization. Over the past few decades, some neurosurgeons have refrained from scalp shaving in order to improve patient satisfaction with brain surgery. However, this hair-sparing approach has not yet been explored in the growing field of magnetic resonance-guided laser interstitial thermal therapy (MRgLITT). This study investigated the initial impact of a no-shave technique on post-operative wound infection rate as well as on entry and target accuracy in MRgLITT for mesial temporal epilepsy. Eighteen patients selected by the Stanford Comprehensive Epilepsy Program between November 2015 and August 2017 were included in the study. All patients underwent functional selective amygdalohippocampotomies using MRgLITT entirely within a diagnostic MRI suite. No hair was removed and no additional precautions were taken for hair or scalp care. Otherwise, routine protocols for surgical preparations and wound closure were followed. The study was performed under approval from Stanford University's Internal Review Board (IRB-37830). No post-operative wound infections or erosions occurred for any patient. The mean entry point error was 2.87 ± 1.3 mm and the mean target error was 1.0 ± 0.9 mm. There have been no other complications associated with this hair-sparing approach. The study's results suggest that hair sparing in MRgLITT surgery for temporal epilepsy does not increase the risk of wound complications or compromise accuracy. This preferred cosmetic approach may thus appeal to epilepsy patients considering such interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    PubMed

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program implementing the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
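
    The item-saving mechanism of a CAT can be sketched in a few lines. The study used the Rasch partial credit model; the toy loop below substitutes the simpler dichotomous Rasch model (a named simplification) and shows the core cycle: pick the most informative remaining item, record a response, re-estimate ability, stop once the standard error is small. Item difficulties, the stopping threshold and the 70-item bank are illustrative.

      import numpy as np

      def rasch_cat(true_theta, item_b, se_stop=0.35, seed=0):
          rng = np.random.default_rng(seed)
          asked, resp = [], []
          theta = 0.0
          while len(asked) < len(item_b):
              # Pick the unasked item with maximum Fisher information p*(1-p).
              free = [i for i in range(len(item_b)) if i not in asked]
              p_free = 1.0 / (1.0 + np.exp(-(theta - item_b[free])))
              nxt = free[int(np.argmax(p_free * (1.0 - p_free)))]
              # Simulate the respondent's answer from the true (latent) score.
              p_true = 1.0 / (1.0 + np.exp(-(true_theta - item_b[nxt])))
              asked.append(nxt)
              resp.append(float(rng.random() < p_true))
              # Re-estimate theta with a few Newton steps on the log-likelihood.
              for _ in range(10):
                  p = 1.0 / (1.0 + np.exp(-(theta - item_b[asked])))
                  info = np.sum(p * (1.0 - p))
                  theta = np.clip(theta + np.sum(np.array(resp) - p) / info, -4, 4)
              if 1.0 / np.sqrt(info) < se_stop:   # stop once theta is precise enough
                  break
          return theta, len(asked)

      theta_hat, n_items = rasch_cat(true_theta=1.0, item_b=np.linspace(-3, 3, 70))
      print(theta_hat, "items used:", n_items)   # typically far fewer than 70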

  2. On the Argument of Simplicity in "Elements" and Schoolbooks of Geometry

    ERIC Educational Resources Information Center

    Barbin, Evelyne

    2007-01-01

    Simplicity arguments are to be found in most geometrical works, from those of Proclus in his "Commentaries on the First Book of Euclid's Elements," up to those of contemporary manuals. Our goal is to read these arguments in their historical contexts to analyze agreements, disagreements and the multiplicity of points of view. For a better…

  3. Simplicity and complexity

    NASA Astrophysics Data System (ADS)

    Crutchfield, James; Wiesner, Karoline

    2010-02-01

    Is anything ever simple? When confronted with a complicated system, scientists typically strive to identify underlying simplicity, which we articulate as natural laws and fundamental principles. This simplicity is what makes nature appear so organized. Atomic physics, for example, approached a solid theoretical foundation when Niels Bohr uncovered the organization of electronic energy levels, which only later were redescribed as quantum wavefunctions. Charles Darwin's revolutionary idea about the "origin" of species emerged by mapping how species are organized and discovering why they came to be that way. And James Watson and Francis Crick's interpretation of DNA diffraction spectra was a discovery of the structural organization of genetic information - it was neither about the molecule's disorder (thermodynamic entropy) nor about the statistical randomness of its base-pair sequences.

  4. About the inevitable compromise between spatial resolution and accuracy of strain measurement for bone tissue: a 3D zero-strain study.

    PubMed

    Dall'Ara, E; Barber, D; Viceconti, M

    2014-09-22

    The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for the prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. Among the sources investigated, the main source of error was found to be the intrinsic noise of the images. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
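
    Fitting the power law mentioned above (error versus size of the analyzed region) is a one-liner in log-log space; the numbers below are invented stand-ins for (nodal spacing, precision error) pairs, used only to show the computation.

      import numpy as np

      spacing = np.array([10, 20, 30, 40, 50])              # nodal spacing (voxels)
      err = np.array([2400., 900., 520., 380., 300.])       # error (microstrain)

      # Fit err = a * spacing**b by linear regression on the log-log data.
      b, log_a = np.polyfit(np.log(spacing), np.log(err), 1)
      a = np.exp(log_a)
      print(f"err ~ {a:.1f} * spacing^{b:.2f}")             # b < 0: error falls with size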

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au; Poulos, Ann E; Houssami, Nehmat

    This study aimed to evaluate the accuracy of radiographers' screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role of one of the two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers' screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and 6-year client follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers gave an estimated sensitivity of 82.2% and specificity of 89.5%. Areas under the reading operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting had adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
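
    The per-reader and pooled accuracy figures above come from standard confusion-matrix arithmetic; the sketch below shows it with tiny made-up recall decisions for two readers (pooling here simply concatenates every reader's calls against the repeated gold standard).

      import numpy as np

      def sens_spec(pred, truth):
          # Sensitivity and specificity from binary recall decisions.
          tp = np.sum((pred == 1) & (truth == 1)); fn = np.sum((pred == 0) & (truth == 1))
          tn = np.sum((pred == 0) & (truth == 0)); fp = np.sum((pred == 1) & (truth == 0))
          return tp / (tp + fn), tn / (tn + fp)

      truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])        # gold standard (made-up)
      reader_a = np.array([1, 1, 0, 0, 0, 1, 0, 0])     # one reader's calls
      reader_b = np.array([1, 1, 1, 0, 0, 0, 1, 0])
      pooled_pred = np.concatenate([reader_a, reader_b])
      pooled_truth = np.concatenate([truth, truth])
      print(sens_spec(reader_a, truth), sens_spec(pooled_pred, pooled_truth))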

  6. Commissioning and quality assurance of an integrated system for patient positioning and setup verification in particle therapy.

    PubMed

    Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G

    2014-08-01

    In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high-energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for the patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO and devoted to patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and of image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of patient positioning system and patient verification system motion proved to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the OTS were found to be consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.

  7. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess the precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.

  8. Applying artificial intelligence technology to support decision-making in nursing: A case study in Taiwan.

    PubMed

    Liao, Pei-Hung; Hsu, Pei-Ti; Chu, William; Chu, Woei-Chyn

    2015-06-01

    This study applied artificial intelligence to help nurses address problems and receive instructions through information technology. Nurses make diagnoses according to professional knowledge, clinical experience, and even instinct. Without comprehensive knowledge and thinking, diagnostic accuracy can be compromised and decisions may be delayed. We used a back-propagation neural network and other tools for data mining and statistical analysis. We further compared the prediction accuracy of the previous methods with an adaptive-network-based fuzzy inference system and the back-propagation neural network, identifying differences in the questions and in nurse satisfaction levels before and after using the nursing information system. This study investigated the use of artificial intelligence to generate nursing diagnoses. The percentage of agreement between diagnoses suggested by the information system and those made by nurses was as much as 87 percent. When patients are hospitalized, we can calculate the probability of various nursing diagnoses based on certain characteristics. © The Author(s) 2013.

  9. Analysis of algebraic reconstruction technique for accurate imaging of gas temperature and concentration based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He

    2016-06-01

    An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements using a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases markedly with the number of projection rays up to about 180 for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstruction results are more accurate than the water vapor concentration results obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of concentration reconstruction and greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and this reconstruction approach is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
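
    The classic ART iteration underlying this kind of tomography is the Kaczmarz row-action update: each projection ray in turn pulls the current estimate toward consistency with its measured path integral. The sketch below is the textbook form with a relaxation factor and a non-negativity constraint, not the paper's improved variant; the tiny two-pixel demo is invented.

      import numpy as np

      def art(A, b, n_sweeps=100, relax=0.5):
          # A[i]: path-length weights of ray i over the grid cells (flattened);
          # b[i]: measured integrated absorbance along ray i.
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  ai = A[i]
                  nrm = ai @ ai
                  if nrm > 0:
                      x += relax * (b[i] - ai @ x) / nrm * ai   # Kaczmarz step
              x = np.maximum(x, 0.0)   # concentrations cannot be negative
          return x

      # Tiny demo: recover a 2-cell "field" from three consistent ray sums.
      A = np.array([[1., 0.], [0., 1.], [1., 1.]])
      b = np.array([2., 3., 5.])
      print(art(A, b))   # approx [2, 3]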

  10. Two-Relaxation-Time Lattice Boltzmann Method for Advective-Diffusive-Reactive Transport

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Hilpert, M.

    2016-12-01

    The lattice Boltzmann method (LBM) has been applied to study a wide range of reactive transport problems in porous and fractured media. The single-relaxation-time (SRT) LBM, which employs a single relaxation time, is the most popular LBM because it is easy to understand and implement. Nevertheless, the SRT LBM may suffer from numerical instability for small values of the relaxation time. By contrast, the multiple-relaxation-time (MRT) LBM can improve numerical stability by tuning its multiple relaxation times, but the complexity of implementing this method restricts its applications. The two-relaxation-time (TRT) LBM, which employs two relaxation times, combines the advantages of the SRT and MRT LBMs: it can produce simulations with better accuracy and stability than the SRT method and is easier to implement than the MRT method. This work evaluated the numerical accuracy and stability of the TRT method by comparing simulation results with analytical solutions for Gaussian hill transport and Taylor dispersion under different advective velocities. The accuracy generally increased with the tunable relaxation time τ, while the stability first increased and then decreased as τ increased, indicating an optimal τ for numerical stability. The free selection of τ enabled the TRT LBM to simulate Gaussian hill transport and Taylor dispersion under relatively high advective velocities, for which the SRT LBM suffered from numerical instability. Finally, the TRT method was applied to study contaminant degradation by chemotactic microorganisms in porous media, serving as a representative reactive transport problem in this study, and it predicted well the evolution of microorganisms and the degradation of contaminants for different transport scenarios. In summary, the TRT LBM produced simulation results with good accuracy and stability for various advective-diffusive-reactive transport problems through tuning of the relaxation time τ, illustrating its potential for studying various biogeochemical processes in the subsurface environment.

  11. Comparison of VFA titration procedures used for monitoring the biogas process.

    PubMed

    Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini

    2014-05-01

    Titrimetric determination of volatile fatty acid (VFA) content is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, so different titration procedures give varying results. Currently, no standardized procedure is in use, and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures for determining the VFA levels of digested manure samples and to compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed a linear correlation between titration results and GC measurements. However, determination of VFA by titration generally overestimated the VFA content compared with GC measurements when samples had low VFA concentrations, i.e., around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e., around 5 g/L. It was further found that the ionisable interfering components studied had the smallest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction was best at handling interference from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, R; Block, A; Harkenrider, M

    2015-06-15

    Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor was varied in size from a diameter of 0.1-30 mm in increments of 0.1 mm. From our previous studies using dual-energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique: the required margin decreased exponentially with target size. An increase in tracking accuracy, as expected, also decreased the margin size. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the required margin is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
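
    A Python re-creation of the described Matlab simulation is sketched below under stated assumptions: coverage is taken as the area overlap between the circular target and a circular aperture of radius (target + margin) centred on the tracked position, and the trial count is reduced for speed. The overlap uses the standard circle-intersection formula; none of the numerical choices here are from the abstract beyond sigma = 2 mm and the 95%/95% criterion.

      import numpy as np

      def lens_area(R, r, d):
          # Intersection area of circles with radii R >= r and centre distance d.
          if d >= R + r:
              return 0.0
          if d <= R - r:
              return np.pi * r * r          # small circle fully contained
          a1 = r * r * np.arccos((d * d + r * r - R * R) / (2 * d * r))
          a2 = R * R * np.arccos((d * d + R * R - r * r) / (2 * d * R))
          a3 = 0.5 * np.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))
          return a1 + a2 - a3

      def min_margin(radius, sigma=2.0, n=2000, cov=0.95, conf=0.95, seed=1):
          rng = np.random.default_rng(seed)
          # 2-D Gaussian tracking error (sigma per axis) -> radial aperture offset.
          d = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))
          area = np.pi * radius ** 2
          for m in np.arange(0.0, 15.0, 0.1):   # sweep candidate margins (mm)
              frac = np.array([lens_area(radius + m, radius, di) for di in d]) / area
              if np.mean(frac >= cov) >= conf:  # 95% coverage, 95% of the time
                  return m
          return np.nan

      print(min_margin(radius=5.0))   # margin (mm) for a 10 mm-diameter target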

  13. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in different population groups are extremely important, especially in cases in which population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: The application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% accuracy for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed an accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion: Methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for different populations due to differences in ethnic patterns, which are directly related to phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), prior methodological adjustment is recommended, as demonstrated in this study. PMID:24037076
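
    Deriving a population-specific discriminant formula from two mandibular measurements amounts to fitting a logistic regression, as sketched below. The measurement values and labels are invented stand-ins; the fitted coefficients play the role of the "new discriminant formula" the study reports.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical measurements: [bigonial distance, mandibular ramus height] in mm.
      X = np.array([[95.0, 58.0], [102.0, 66.0], [88.0, 54.0], [99.0, 63.0],
                    [105.0, 69.0], [90.0, 55.0], [97.0, 61.0], [93.0, 57.0]])
      y = np.array([0, 1, 0, 1, 1, 0, 1, 0])   # 0 = female, 1 = male

      clf = LogisticRegression().fit(X, y)
      print(clf.coef_, clf.intercept_)          # the fitted discriminant formula
      print(clf.predict([[96.0, 60.0]]))        # sex estimate for a new mandible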

  15. Low-Cost 3-D Flow Estimation of Blood With Clutter.

    PubMed

    Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali

    2017-05-01

    Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme that combines the low-complexity sum-of-absolute-differences and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with the less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter at beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and a standard deviation of 3.1% relative to the actual flow rate.
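
    A flavor of the clutter-filtering simplification described above can be given in a few lines: power iteration finds the dominant (clutter-like) singular direction of the slow-time ensemble without a full SVD, and a rank-1 deflation removes it. The shapes, iteration count, and toy signals below are illustrative assumptions, not the paper's processing chain.

        import numpy as np

        def remove_dominant_component(X, iters=50, seed=0):
            """Project the strongest (clutter-like) singular direction out of
            the slow-time ensemble X, shape (n_samples, n_slow_time), no SVD."""
            rng = np.random.default_rng(seed)
            C = X.conj().T @ X                       # slow-time correlation matrix
            v = rng.standard_normal(C.shape[0]).astype(complex)
            for _ in range(iters):                   # power iteration
                v = C @ v
                v /= np.linalg.norm(v)
            return X - np.outer(X @ v.conj(), v)     # rank-1 deflation

        # toy ensemble: strong slow clutter plus a weak fast blood echo
        rng = np.random.default_rng(1)
        t = np.arange(64)
        clutter = 40.0 * np.exp(2j * np.pi * (0.01 * t + rng.random((16, 1))))
        blood = 1.0 * np.exp(2j * np.pi * (0.25 * t + rng.random((16, 1))))
        X = clutter + blood
        print(np.linalg.norm(X), np.linalg.norm(remove_dominant_component(X)))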

  16. Patient-specific instrument can achieve same accuracy with less resection time than navigation assistance in periacetabular pelvic tumor surgery: a cadaveric study.

    PubMed

    Wong, Kwok-Chuen; Sze, Kwan-Yik; Wong, Irene Oi-Ling; Wong, Chung-Ming; Kumta, Shekhar-Madhukar

    2016-02-01

    Inaccurate resection in pelvic tumors can result in compromised margins with increased local recurrence. Navigation-assisted and patient-specific instrument (PSI) techniques have recently been reported to assist pelvic tumor surgery, with a tendency toward improved surgical accuracy. We examined and compared the accuracy of transferring a virtual pelvic resection plan to actual surgery using the navigation-assisted or PSI technique in a cadaver study. We performed CT scans of twelve cadaveric bodies including whole pelvic bones. Either supraacetabular or partial acetabular resection was virtually planned in a hemipelvis using engineering software. The virtual resection plan was transferred to a CT-based navigation system or was used for the design and fabrication of PSI. Pelvic resections were performed using navigation assistance in six cadavers and PSI in another six. Post-resection images were co-registered with the preoperative planning for comparative analysis of the resection accuracy of the two techniques. The mean average deviation error from the planned resection was no different ([Formula: see text]) for the navigation and the PSI groups: 1.9 versus 1.4 mm, respectively. The mean time required for the bone resection was greater ([Formula: see text]) for the navigation group than for the PSI group: 16.2 versus 1.1 min, respectively. In simulated periacetabular pelvic tumor resections, the PSI technique enabled surgeons to reproduce the virtual surgical plan with similar accuracy but with less bone resection time when compared with navigation assistance. Further studies are required to investigate the clinical benefits of the PSI technique in pelvic tumor surgery.

  17. Trouble Brewing: Using Observations of Invariant Behavior to Detect Malicious Agency in Distributed Control Systems

    NASA Astrophysics Data System (ADS)

    McEvoy, Thomas Richard; Wolthusen, Stephen D.

    Recent research on intrusion detection in supervisory control and data acquisition (SCADA) and DCS systems has focused on anomaly detection at the protocol level, based on the well-defined nature of traffic on such networks. Here, we consider attacks which compromise sensors or actuators (including physical manipulation), where intrusion may not be readily apparent because data and computational states can be controlled to give an appearance of normality and because sensor and control systems have limited accuracy. To counter these, we propose to consider indirect relations between sensor readings to detect such attacks through concurrent observations as determined by control laws and constraints.

  18. Meraculous2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-01

    meraculous2 is a whole-genome shotgun assembler for short reads that is capable of assembling large, polymorphic genomes with modest computational requirements. Meraculous relies on an efficient and conservative traversal of the subgraph of the k-mer (de Bruijn) graph of oligonucleotides with unique high-quality extensions in the dataset, avoiding an explicit error-correction step as used in other short-read assemblers. Additional features include (1) handling of allelic variation using "bubble" structures within the de Bruijn graph, (2) gap closing of repetitive and low-quality regions using localized assemblies, and (3) an improved scaffolding algorithm that produces more complete assemblies without compromising scaffolding accuracy.
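
    The conservative traversal idea (extend a contig only through k-mers that have a single possible extension in the reads) can be illustrated with a toy sketch; this is not Meraculous code, it ignores base qualities and reverse complements, and the reads are made up.

        from collections import defaultdict

        def unique_extension_contig(reads, k, seed_kmer):
            ext = defaultdict(set)                      # k-mer -> next bases seen
            for r in reads:
                for i in range(len(r) - k):
                    ext[r[i:i + k]].add(r[i + k])
            contig, kmer, seen = seed_kmer, seed_kmer, {seed_kmer}
            while len(ext[kmer]) == 1:                  # stop at forks or dead ends
                base = next(iter(ext[kmer]))
                kmer = kmer[1:] + base
                if kmer in seen:                        # avoid cycling
                    break
                seen.add(kmer)
                contig += base
            return contig

        reads = ["ACGTACGGA", "GTACGGATC", "CGGATCCA"]
        print(unique_extension_contig(reads, 4, "ACGT"))   # -> ACGTACGGATCCA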

  19. Caveat actor, Caveat emptor: some notes on some hazards of Tinseltown teaching.

    PubMed

    Greenberg, Harvey Roy

    2009-06-01

    The use of films in teaching psychiatry and psychotherapy remains problematic for a number of reasons. The bulk of films are made for commercial reasons, not for educational purposes. Scientific truth is often overshadowed by narrative requirement in films. In most 'mainstream' cinema and 'indie' productions, diagnostic accuracy is still seriously compromised by narrative considerations. Clinical reality continues to be undermined and overridden by the need--as makers see it--to tell a powerful story in aid of huge box office receipts. Therapists in films are also often caricatures and caution must be employed in using cinema in real-time individual therapy.

  20. Morphology-Induced Information Transfer in Bat Sonar

    NASA Astrophysics Data System (ADS)

    Reijniers, Jonas; Vanderelst, Dieter; Peremans, Herbert

    2010-10-01

    It has been argued that an important part of understanding bat echolocation comes down to understanding the morphology of the bat sound processing apparatus. In this Letter we present a method based on information theory that allows us to assess target localization performance of bat sonar, without a priori knowledge on the position, size, or shape of the reflecting target. We demonstrate this method using simulated directivity patterns of the frequency-modulated bat Micronycteris microtis. The results of this analysis indicate that the morphology of this bat’s sound processing apparatus has evolved to be a compromise between sensitivity and accuracy with the pinnae and the noseleaf playing different roles.

  1. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
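
    The reliability metric being minimized is the probability that the design requirements are violated under parametric uncertainty. As a point of reference, the sketch below estimates such a probability by plain Monte Carlo for a made-up one-parameter stability requirement; the paper's hybrid deterministic-sampling/asymptotic estimator is precisely a cheaper replacement for this brute-force loop.

        import numpy as np

        def violation_probability(requirement, sampler, n=100_000, seed=0):
            rng = np.random.default_rng(seed)
            return float(np.mean(~requirement(sampler(rng, n))))

        # toy closed loop x' = -(a + kp) x: demand a stability margin of 0.2
        kp = 1.5
        requirement = lambda a: a + kp > 0.2               # vectorized over samples
        sampler = lambda rng, n: rng.normal(-1.0, 0.3, n)  # uncertain plant pole a

        print(f"P(violation) ~ {violation_probability(requirement, sampler):.3f}")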

  2. Motor performance of tongue with a computer-integrated system under different levels of background physical exertion

    PubMed Central

    Huo, Xueliang; Johnson-Long, Ashley N.; Ghovanloo, Maysam; Shinohara, Minoru

    2015-01-01

    The purpose of this study was to compare the motor performance of tongue, using Tongue Drive System, to hand operation for relatively complex tasks under different levels of background physical exertion. Thirteen young able-bodied adults performed tasks that tested the accuracy and variability in tracking a sinusoidal waveform, and the performance in playing two video games that require accurate and rapid movements with cognitive processing using tongue and hand under two levels of background physical exertion. Results show additional background physical activity did not influence rapid and accurate displacement motor performance, but compromised the slow waveform tracking and shooting performances in both hand and tongue. Slow waveform tracking performance by the tongue was compromised with an additional motor or cognitive task, but with an additional motor task only for the hand. Practitioner Summary We investigated the influence of task complexity and background physical exertion on the motor performance of tongue and hand. Results indicate the task performance degrades with an additional concurrent task or physical exertion due to the limited attentional resources available for handling both the motor task and background exertion. PMID:24003900

  3. Ising model for collective decision making during group motion

    NASA Astrophysics Data System (ADS)

    Pinkoviezky, Itai; Gov, Nir; Couzin, Iain

    Collective decision making is a key feature during natural motion of animal groups and is also crucial for human groups. This phenomenon can be exemplified by the scenario of two subgroups that hold conflicting preferred directions of motion. The constraint of group cohesion drives the motion either towards a compromise or towards one of the preferred targets. The transition between compromise and decision has been found in simulations of flock models, but the nature of this transition is not well understood. We present a minimal spin model for this system where we interpret the spin-spin interaction as a social force. This model exhibits both first and second order transitions. The group motion changes from size-dependent diffusion at high temperatures to run-and-tumble motion below the critical temperature. In the presence of minority and majority subgroups, we find that there is a trade-off between the speed of reaching a target and the accuracy. We then compare the results of the spin model to detailed simulations of a flock model, and find overall very similar dynamics, with the role of the temperature taken by the inverse of the number of uninformed individuals.
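
    A minimal sketch of this kind of spin model: a fully connected Ising system in which two informed subgroups feel opposite fields (their preferred directions), social alignment enters as the ferromagnetic coupling, and the group's choice is read off the sign of the mean spin. All parameters below are illustrative, not fitted to the flock simulations.

        import numpy as np

        def group_decision(n=100, n_inf=20, J=1.0, h=0.5, T=0.5,
                           steps=20_000, seed=0):
            """Mean spin after Metropolis dynamics on a fully connected Ising
            model; two informed subgroups feel opposite fields."""
            rng = np.random.default_rng(seed)
            field = np.zeros(n)
            field[:n_inf] = -h                  # minority prefers the -1 target
            field[n_inf:2 * n_inf + 10] = +h    # slightly larger informed majority
            s = rng.choice([-1, 1], n)
            for _ in range(steps):
                i = rng.integers(n)
                # energy change for flipping spin i (mean-field Ising)
                dE = 2 * s[i] * (J * (s.sum() - s[i]) / n + field[i])
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i] = -s[i]
            return s.mean()

        print(group_decision())                 # sign = direction the group picks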

  4. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE

  5. Symmetric rotating-wave approximation for the generalized single-mode spin-boson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Victor V.; Scholes, Gregory D.; Brumer, Paul

    2011-10-15

    The single-mode spin-boson model exhibits behavior not included in the rotating-wave approximation (RWA) in the ultra and deep-strong coupling regimes, where counter-rotating contributions become important. We introduce a symmetric rotating-wave approximation that treats rotating and counter-rotating terms equally, preserves the invariances of the Hamiltonian with respect to its parameters, and reproduces several qualitative features of the spin-boson spectrum not present in the original rotating-wave approximation both off-resonance and at deep-strong coupling. The symmetric rotating-wave approximation allows for the treatment of certain ultra- and deep-strong coupling regimes with similar accuracy and mathematical simplicity as does the RWA in the weak-coupling regime. Additionally, we symmetrize the generalized form of the rotating-wave approximation to obtain the same qualitative correspondence with the addition of improved quantitative agreement with the exact numerical results. The method is readily extended to higher accuracy if needed. Finally, we introduce the two-photon parity operator for the two-photon Rabi Hamiltonian and obtain its generalized symmetric rotating-wave approximation. The existence of this operator reveals a parity symmetry similar to that in the Rabi Hamiltonian as well as another symmetry that is unique to the two-photon case, providing insight into the mathematical structure of the two-photon spectrum, significantly simplifying the numerics, and revealing some interesting dynamical properties.
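
    For orientation, a commonly used form of the single-mode spin-boson (quantum Rabi) Hamiltonian and the term kept by the conventional RWA, which discards the counter-rotating contributions that matter at ultra- and deep-strong coupling (standard notation assumed here, not the paper's symmetric construction):

        H = \hbar\omega\, a^{\dagger}a + \tfrac{\hbar\omega_0}{2}\,\sigma_z + \hbar g\,\sigma_x\,(a + a^{\dagger}),
        \qquad
        H_{\mathrm{RWA}} = \hbar\omega\, a^{\dagger}a + \tfrac{\hbar\omega_0}{2}\,\sigma_z + \hbar g\,(\sigma_+ a + \sigma_- a^{\dagger}).

    The dropped counter-rotating terms are \hbar g\,(\sigma_+ a^{\dagger} + \sigma_- a); the symmetric approximation described above treats the rotating and counter-rotating parts on an equal footing.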

  6. Evaluation of Pharmacokinetic Assumptions Using a 443 ...

    EPA Pesticide Factsheets

    With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds using in vivo data, we are now able to rapidly parameterize generic PBPK models using in vitro data to allow IVIVE for chemicals tested for bioactivity via high-throughput screening. However, these new models are expected to have limited accuracy due to their simplicity and generalization of assumptions. We evaluated the assumptions and performance of a generic PBPK model (R package "httk") parameterized by a library of in vitro PK data for 443 chemicals. We evaluate and calibrate Schmitt's method by comparing the predicted volume of distribution (Vd) and tissue partition coefficients to in vivo measurements. The partition coefficients are initially overpredicted, likely due to overestimation of partitioning into phospholipids in tissues and the lack of lipid partitioning in the in vitro measurements of the fraction unbound in plasma. Correcting for phospholipids and plasma binding improved the predictive ability (R2 to 0.52 for partition coefficients and 0.32 for Vd). We lacked enough data to evaluate the accuracy of changing the model structure to include tissue blood volumes and/or separate compartments for richly/poorly perfused tissues; therefore we evaluated the impact of these changes on model

  7. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In a traditional 3-D measurement system where the processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform during the real-time process. To cope with this issue, here we present a novel high-speed, real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into consideration. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
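
    The precomputed pixel-mapping idea reduces the per-frame cost to a single gather. Below is a sketch under assumed conditions (one-term radial distortion model, nearest-neighbour mapping, illustrative coefficient and image size), not the paper's calibration pipeline.

        import numpy as np

        def build_lut(h, w, k1=-0.25):
            """For every pixel of the corrected image, precompute the (row, col)
            of the distorted source pixel under a one-term radial model."""
            y, x = np.mgrid[0:h, 0:w].astype(float)
            cx, cy = (w - 1) / 2, (h - 1) / 2
            xn, yn = (x - cx) / cx, (y - cy) / cy            # normalized coords
            r2 = xn**2 + yn**2
            xd, yd = xn * (1 + k1 * r2), yn * (1 + k1 * r2)  # forward distortion
            src_c = np.clip(np.round(xd * cx + cx), 0, w - 1).astype(int)
            src_r = np.clip(np.round(yd * cy + cy), 0, h - 1).astype(int)
            return src_r, src_c

        lut = build_lut(480, 640)          # expensive step, done once offline

        def correct(frame, lut):
            src_r, src_c = lut
            return frame[src_r, src_c]     # real-time step: one gather per frame

        frame = np.random.default_rng(0).integers(0, 256, (480, 640))
        print(correct(frame, lut).shape)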

  8. Sleep versus wake classification from heart rate variability using computational intelligence: consideration of rejection in classification models.

    PubMed

    Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie

    2008-01-01

    Reliability of classification performance is important for many biomedical applications. A classification model which considers reliability in the development of the model, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG)-determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural networks, multilayer perceptron (MLP) neural networks, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
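
    The rejection mechanism can be sketched generically: train any probabilistic classifier, then score only the segments whose posterior confidence clears a threshold. The model, threshold, and synthetic data below are illustrative, not the CHIME features or the LVQ/MLP/SVM models of the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)  # sleep/wake proxy

        clf = LogisticRegression().fit(X[:700], y[:700])
        proba = clf.predict_proba(X[700:])
        conf = proba.max(axis=1)

        accept = conf >= 0.8                 # reject low-confidence segments
        acc = (clf.predict(X[700:])[accept] == y[700:][accept]).mean()
        print(f"accepted {accept.mean():.0%} of segments, accuracy {acc:.0%}")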

  9. Catalytic spectrophotometric determination of iodine in coal by pyrohydrolysis decomposition.

    PubMed

    Wu, Daishe; Deng, Haiwen; Wang, Wuyi; Xiao, Huayun

    2007-10-10

    A method for the determination of iodine in coal using pyrohydrolysis for sample decomposition was proposed. A pyrohydrolysis apparatus was constructed, and the procedure was designed to burn and hydrolyse coal steadily and completely. The parameters of pyrohydrolysis were optimized through an orthogonal experimental design. Iodine in the absorption solution was evaluated by the catalytic spectrophotometric method, and the absorbance at 420 nm was measured by a double-beam UV-visible spectrophotometer. The limits of detection and quantification of the proposed method were 0.09 μg g⁻¹ and 0.29 μg g⁻¹, respectively. After analysing some Chinese soil reference materials (SRMs), reasonable agreement was found between the measured values and the certified values. The accuracy of this approach was confirmed by the analysis of eight coals spiked with SRMs, with recoveries from 94.97% to 109.56% and a mean of 102.58%. Six repeated tests were conducted for eight coal samples, including high-sulfur coal and high-fluorine coal. Good repeatability was obtained, with relative standard deviations from 2.88% to 9.52%, averaging 5.87%. With such benefits as simplicity, precision, accuracy and economy, this approach can meet the requirements of the limits of detection and quantification for analysing iodine in coal, and hence it is highly suitable for routine analysis.
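
    For orientation, one common way to obtain such detection and quantification limits from a calibration line is the ICH-style 3.3σ/S and 10σ/S rule; the paper does not state its exact formula, and the calibration data below are made up.

        import numpy as np

        conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # μg g-1 (illustrative)
        resp = np.array([0.052, 0.101, 0.208, 0.396, 0.801])  # absorbance at 420 nm

        slope, intercept = np.polyfit(conc, resp, 1)
        residual_sd = np.std(resp - (slope * conc + intercept), ddof=2)

        lod = 3.3 * residual_sd / slope
        loq = 10.0 * residual_sd / slope
        print(f"LOD = {lod:.3f} μg g-1, LOQ = {loq:.3f} μg g-1")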

  10. Internal electric fields of electrolytic solutions induced by space-charge polarization

    NASA Astrophysics Data System (ADS)

    Sawada, Atsushi

    2006-10-01

    The dielectric dispersion of electrolytic solutions prepared using chlorobenzene as a solvent and tetrabutylammonium tetraphenylborate as a solute is analyzed in terms of space-charge polarization in order to derive the ionic constants, and the Stokes radius obtained is discussed in comparison with values that have been measured by conductometry. A homogeneous internal electric field is assumed for simplicity in the analysis of the space-charge polarization. The justification of the homogeneous-field approximation is discussed from two points of view: one is the accuracy of the observed Stokes radius value, and the other is the effect of bound charges on the electrodes, which have long been believed to level the highly inhomogeneous field. In order to investigate the actual electric field, numerical calculations based on the Poisson equation are carried out by considering the influence of the bound charges. The variation of the number of bound charges with time is clarified by determining the relaxation function of the dielectric constant attributed to the space-charge polarization. Finally, a technique based on a two-field approximation, where homogeneous and hyperbolic fields are independently applied in relevant frequency ranges, is introduced to analyze the space-charge polarization of the electrolytic solutions, and a further improvement of the accuracy in the determination of the Stokes radius is achieved.

  11. Development of paper-based sensor coupled with smartphone detector for simple creatinine determination

    NASA Astrophysics Data System (ADS)

    Tambaru, David; Rupilu, Reski Helena; Nitti, Fidelis; Gauru, Imanuel; Suwari

    2017-03-01

    Creatinine level in urine is one of the most important indicators of kidney disease. A routine assay for this compound is vital, especially for those who suffer from kidney malfunction. However, the existing methods are mostly expensive, impractical and time-consuming. Herein, we report the development of a sensor for creatinine analysis using cheap materials such as paper, coupled with a smartphone as the detector, leading to an inexpensive and instrument-free method. This work was based on the Jaffe reaction, in which creatinine reacts with picric acid in basic solution to form an orange-red creatinine-picrate complex. The red-green-blue intensity of the complex, captured with a smartphone, was measured and then digitized with the free Microsoft Visual C# 2010 Express application as the analytical response. The proposed method was evaluated based on its precision, accuracy, percent recovery and limit of detection, which were found to be 5.55%, 0.74%, 96.73 ± 6.12% and 8.02 ppm, respectively. It can be concluded that the paper-based sensor with a digital imaging approach using Microsoft Visual C# 2010 Express, with its simplicity and affordability, can be applied for on-site determination of creatinine levels.
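
    The digital-imaging readout amounts to averaging a colour channel over a region of interest and applying a linear calibration. A toy sketch with synthetic "photos" follows (Python standing in for the C# application; the channel choice and all numbers are assumptions):

        import numpy as np

        def roi_intensity(img, ch=1):           # ch=1: green channel
            return img[..., ch].mean()

        # synthetic calibration photos: green intensity falls with concentration
        rng = np.random.default_rng(0)
        conc = np.array([0.0, 20.0, 40.0, 80.0])                  # ppm
        imgs = [np.clip(rng.normal(200 - 1.2 * c, 2, (50, 50, 3)), 0, 255)
                for c in conc]
        green = np.array([roi_intensity(im) for im in imgs])

        slope, intercept = np.polyfit(green, conc, 1)             # calibration
        sample = np.clip(rng.normal(200 - 1.2 * 55.0, 2, (50, 50, 3)), 0, 255)
        print(f"estimated creatinine: {slope * roi_intensity(sample) + intercept:.1f} ppm")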

  12. Step-height standards based on the rapid formation of monolayer steps on the surface of layered crystals

    NASA Astrophysics Data System (ADS)

    Komonov, A. I.; Prinz, V. Ya.; Seleznev, V. A.; Kokh, K. A.; Shlegel, V. N.

    2017-07-01

    Metrology is essential for nanotechnology, especially for structures and devices with feature sizes going down to nanometers. Scanning probe microscopes (SPMs) permit measurement of nanometer- and subnanometer-scale objects. The accuracy of size measurements performed using SPMs is largely defined by the accuracy of the calibration standards used. In the present publication, we demonstrate that monolayer step-height standards (∼1 and ∼0.6 nm) can be easily prepared by cleaving Bi2Se3 and ZnWO4 layered single crystals. It was shown that the conducting surface of Bi2Se3 crystals offers a height standard appropriate for calibrating STMs and for testing conductive SPM probes. Our AFM study of the morphology of freshly cleaved (0001) Bi2Se3 surfaces proved that such surfaces remain atomically smooth for a period of at least half a year. The (010) surfaces of ZnWO4 crystals remained atomically smooth for one day, but two days later an additional nanorelief of amplitude ∼0.3 nm appeared on those surfaces. This relief, however, did not grow further in height, and it did not hamper the calibration. The simplicity and speed of fabrication of these step-height standards, as well as their high stability, make them available to the large and permanently growing number of users involved in 3D printing activities.

  13. High order local absorbing boundary conditions for acoustic waves in terms of farfield expansions

    NASA Astrophysics Data System (ADS)

    Villamizar, Vianey; Acosta, Sebastian; Dastrup, Blake

    2017-03-01

    We devise a new high order local absorbing boundary condition (ABC) for radiating problems and scattering of time-harmonic acoustic waves from obstacles of arbitrary shape. By introducing an artificial boundary S enclosing the scatterer, the original unbounded domain Ω is decomposed into a bounded computational domain Ω- and an exterior unbounded domain Ω+. Then, we define interface conditions at the artificial boundary S, from truncated versions of the well-known Wilcox and Karp farfield expansion representations of the exact solution in the exterior region Ω+. As a result, we obtain a new local absorbing boundary condition (ABC) for a bounded problem on Ω-, which effectively accounts for the outgoing behavior of the scattered field. Contrary to the low order absorbing conditions previously defined, the error at the artificial boundary induced by this novel ABC can be easily reduced to reach any accuracy within the limits of the computational resources. We accomplish this by simply adding as many terms as needed to the truncated farfield expansions of Wilcox or Karp. The convergence of these expansions guarantees that the order of approximation of the new ABC can be increased arbitrarily without having to enlarge the radius of the artificial boundary. We include numerical results in two and three dimensions which demonstrate the improved accuracy and simplicity of this new formulation when compared to other absorbing boundary conditions.
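
    For reference, the farfield representations from which the interface conditions are truncated, in the commonly quoted forms of the Wilcox (3-D) and Karp (2-D) expansions (notation assumed here; the paper's exact conventions may differ):

        u(r,\theta,\varphi) = \frac{e^{ikr}}{kr}\sum_{l=0}^{\infty}\frac{F_l(\theta,\varphi)}{(kr)^{l}} \quad \text{(Wilcox, 3-D)},

        u(r,\theta) = H_0^{(1)}(kr)\sum_{l=0}^{\infty}\frac{F_l(\theta)}{(kr)^{l}}
                    + H_1^{(1)}(kr)\sum_{l=0}^{\infty}\frac{G_l(\theta)}{(kr)^{l}} \quad \text{(Karp, 2-D)}.

    Truncating these series after a finite number of terms at the artificial boundary S yields the local ABC, and adding terms raises the order of approximation without enlarging S.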

  14. Full-waveform data for building roof step edge localization

    NASA Astrophysics Data System (ADS)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  15. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
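
    The strong-order gain from the Milstein correction is easy to demonstrate on a scalar SDE with a known exact solution. The sketch below uses geometric Brownian motion (where no area-integral terms arise), so it shows the generic O(Δt) versus O(Δt^{1/2}) behaviour, not the Coulomb-collision operator itself; all constants are illustrative.

        import numpy as np

        a, b, x0, T = 0.5, 1.0, 1.0, 1.0
        rng = np.random.default_rng(0)

        def strong_error(nsteps, npaths=2000):
            dt = T / nsteps
            err_em = err_mil = 0.0
            for _ in range(npaths):
                dW = rng.normal(0, np.sqrt(dt), nsteps)
                x_em = x_mil = x0
                for w in dW:
                    x_em += a * x_em * dt + b * x_em * w          # Euler-Maruyama
                    x_mil += (a * x_mil * dt + b * x_mil * w
                              + 0.5 * b * b * x_mil * (w * w - dt))  # Milstein term
                exact = x0 * np.exp((a - 0.5 * b * b) * T + b * dW.sum())
                err_em += abs(x_em - exact)
                err_mil += abs(x_mil - exact)
            return err_em / npaths, err_mil / npaths

        for n in (16, 64, 256):
            em, mil = strong_error(n)
            print(f"steps {n:4d}: Euler-Maruyama {em:.4f}, Milstein {mil:.4f}")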

  16. An innovative SNP genotyping method adapting to multiple platforms and throughputs.

    PubMed

    Long, Y M; Chao, W S; Ma, G J; Xu, S S; Qi, L L

    2017-03-01

    An innovative genotyping method designated semi-thermal asymmetric reverse PCR (STARP) was developed for genotyping individual SNPs with improved accuracy, flexible throughput, low operational costs, and high platform compatibility. Multiplex chip-based technology for genome-scale genotyping of single nucleotide polymorphisms (SNPs) has made great progress in the past two decades. However, PCR-based genotyping of individual SNPs still remains problematic in accuracy, throughput, simplicity, and/or operational costs, as well as in compatibility with multiple platforms. Here, we report a novel SNP genotyping method designated semi-thermal asymmetric reverse PCR (STARP). In this method, the genotyping assay is performed under unique PCR conditions using two universal priming-element-adjustable primers (PEA-primers) and one group of three locus-specific primers: two asymmetrically modified allele-specific primers (AMAS-primers) and their common reverse primer. Each of the two AMAS-primers has one base substituted at a different position in its 3' region to significantly increase the amplification specificity of the two alleles, and is tailed at the 5' end to provide priming sites for the PEA-primers. The two PEA-primers were developed for common use in all genotyping assays to stringently target the PCR fragments generated by the two AMAS-primers with similar PCR efficiencies and for flexible detection using either gel-free fluorescence signals or gel-based size separation. The state-of-the-art primer design and unique PCR conditions endow STARP with all the major advantages of high accuracy, flexible throughput, simple assay design, low operational costs, and platform compatibility. In addition to SNPs, STARP can also be employed for genotyping indels (insertion-deletion polymorphisms). As vast variations in DNA sequences are being unearthed by genome sequencing projects and genotyping by sequencing, STARP will have wide applications across all biological organisms in agriculture, medicine, and forensics.

  17. First versus second trimester mean platelet volume and uric acid for prediction of preeclampsia in women at moderate and low risk.

    PubMed

    Rezk, Mohamed; Gaber, Wael; Shaheen, Abdelhamid; Nofal, Ahmed; Emara, Mahmoud; Gamal, Awni; Badr, Hassan

    2018-06-12

    To determine whether second trimester mean platelet volume (MPV) and serum uric acid are reasonable predictors of preeclampsia (PE) in patients at moderate and low risk. This prospective study was conducted on 9522 women at low or moderate risk for developing PE who underwent dual measurements of MPV and serum uric acid in the late first trimester (10-12 weeks) and in the second trimester (18-20 weeks), and who were subsequently divided into two groups: a PE group (n = 286) who later developed PE and a non-PE group (n = 9236). Test validity of MPV and serum uric acid was the primary outcome measure. Data were collected and analyzed. Second trimester MPV is a good predictor of the development of PE at a cutoff value of 9.55 fL, with an area under the curve (AUC) of 0.86, sensitivity of 95.2%, specificity of 66.7%, positive predictive value (PPV) of 87%, negative predictive value (NPV) of 85.7%, and accuracy of 86.7%. Second trimester serum uric acid is a good predictor of the development of PE at a cutoff value of 7.35 mg/dL, with an AUC of 0.85, sensitivity of 95.2%, specificity of 55.6%, PPV of 83.3%, NPV of 83.3%, and accuracy of 83.3%. The combination of both tests has a sensitivity of 100%, specificity of 22.2%, PPV of 75%, NPV of 100%, and accuracy of 76.7%. Second trimester MPV and serum uric acid, alone or in combination, could be used as useful biochemical markers for the prediction of PE based on their validity, simplicity, and availability.

  18. A computer-based servo system for controlling isotonic contractions of muscle.

    PubMed

    Smith, J P; Barsotti, R J

    1993-11-01

    We have developed a computer-based servo system for controlling isotonic releases in muscle. This system is a composite of commercially available devices: an IBM personal computer, an analog-to-digital (A/D) board, an Akers AE801 force transducer, and a Cambridge Technology motor. The servo loop controlling the force clamp is generated by computer via the A/D board, using a program written in QuickBASIC 4.5. Results are shown that illustrate the ability of the system to clamp the force generated by either skinned cardiac trabeculae or single rabbit psoas fibers down to the resolution of the force transducer within 4 ms. This rate is independent of the level of activation of the tissue and the size of the load imposed during the release. The key to the effectiveness of the system consists of two algorithms that are described in detail. The first is used to calculate the error signal to hold force to the desired level. The second algorithm is used to calculate the appropriate gain of the servo for a particular fiber and the size of the desired load to be imposed. The results show that the described computer-based method for controlling isotonic releases in muscle represents a good compromise between simplicity and performance and is an alternative to the custom-built digital/analog servo devices currently being used in studies of muscle mechanics.
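
    The structure of the two algorithms can be sketched abstractly: a per-tick error signal drives the motor command, and the loop gain is scaled by a per-fibre stiffness estimate. The first-order elastic plant and all constants below are assumptions for illustration, not the QuickBASIC implementation.

        import numpy as np

        def force_clamp(target, k_fibre=2.0, gain_scale=0.5, steps=100):
            """Discrete servo loop: move the motor each tick in proportion to
            the force error until the measured force holds at the target load."""
            gain = gain_scale / k_fibre        # second algorithm: gain per fibre
            pos, force, log = 0.0, 1.0, []     # start from isometric force = 1.0
            for _ in range(steps):
                error = force - target         # first algorithm: error signal
                pos -= gain * error            # command sent to the motor
                force = 1.0 + k_fibre * pos    # assumed elastic plant response
                log.append(force)
            return np.array(log)

        trace = force_clamp(target=0.3)
        print(f"clamped force: {trace[-1]:.3f} (target 0.3)")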

  19. Assessing Constituent Levels in Smokeless Tobacco Products: A New Approach to Engaging and Educating the Public

    PubMed Central

    Loken, Barbara; Williams, Allison L.; Vitriol, Joseph; Stepanov, Irina; Hatsukami, Dorothy

    2015-01-01

    Introduction: Providing accurate information about the constituents in nicotine-containing products may help tobacco users make informed decisions about product choices. An experimental study examined a novel approach for presenting accurate constituent information about brands and types of smokeless tobacco (SLT) that could be understood by the general public. Methods: Participants were recruited through Amazon’s Mechanical Turk and presented information online about 2 constituent dimensions of SLT products—nicotine and/or toxicity (for simplicity, “toxicity” in this study refers to carcinogenic constituents). Participants completed measures of knowledge and tobacco health risks at 2 time points: before and after exposure to constituent information. Results: Participants were found to increase their knowledge that toxicity contributes to disease risk and nicotine contributes to addiction, that SLT products vary in their levels of nicotine and toxicity, and that both SLT and cigarette products have higher toxicity than medicinal nicotine replacement therapies (e.g., nicotine lozenges). Study results showed no differences when presenting toxicity information alone versus presenting it in conjunction with nicotine information, and found no misperceptions or confusion about the relative harmfulness of cigarettes, SLT, or nicotine replacement therapy. Conclusions: Providing tobacco constituent information to smokers and nonsmokers will improve their knowledge about the relative toxicity across products and variations within a class of tobacco products without compromising their perception of the health risks associated with tobacco use. PMID:25634934

  20. Building an FTP guard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sands, P.D.

    1998-08-01

    Classified designs usually include lesser classified (including unclassified) components. An engineer working on such a design needs access to the various sub-designs at lower classification levels. For simplicity, the problem is presented with only two levels: high and low. If the low-classification component designs are stored in the high network, they become inaccessible to persons working on a low network. In order to keep the networks separate, the component designs may be duplicated in all networks, resulting in a synchronization problem. Alternatively, they may be stored in the low network and brought into the high network when needed. The latter solution results in the use of sneaker-net (copying the files from the low system to a tape and carrying the tape to a high system) or a file transfer guard. This paper shows how an FTP Guard was constructed and implemented without degrading the security of the underlying B3 platform. The paper then shows how the guard can be extended to an FTP proxy server or an HTTP proxy server. The extension is accomplished by allowing the high-side user to select among items that already exist on the low-side. No high-side data can be directly compromised by the extension, but a mechanism must be developed to handle the low-bandwidth covert channel that would be introduced by the application.

  1. Lessons learned: Infrastructure development and financial management for large, publicly funded, international trials.

    PubMed

    Larson, Gregg S; Carey, Cate; Grarup, Jesper; Hudson, Fleur; Sachi, Karen; Vjecha, Michael J; Gordin, Fred

    2016-04-01

    Randomized clinical trials are widely recognized as essential to address worldwide clinical and public health research questions. However, their size and duration can overwhelm available public and private resources. To remain competitive in international research settings, advocates and practitioners of clinical trials must implement practices that reduce their cost. We identify approaches and practices for large, publicly funded, international trials that reduce cost without compromising data integrity and recommend an approach to cost reporting that permits comparison of clinical trials. We describe the organizational and financial characteristics of The International Network for Strategic Initiatives in Global HIV Trials, an infectious disease research network that conducts multiple, large, long-term, international trials, and examine challenges associated with simple and streamlined governance and an infrastructure and financial management model that is based on performance, transparency, and accountability. It is possible to reduce costs of participants' follow-up and not compromise clinical trial quality or integrity. The International Network for Strategic Initiatives in Global HIV Trials network has successfully completed three large HIV trials using cost-efficient practices that have not adversely affected investigator enthusiasm, accrual rates, loss-to-follow-up, adherence to the protocol, and completion of data collection. This experience is relevant to the conduct of large, publicly funded trials in other disease areas, particularly trials dependent on international collaborations. New approaches, or creative adaption of traditional clinical trial infrastructure and financial management tools, can render large, international clinical trials more cost-efficient by emphasizing structural simplicity, minimal up-front costs, payments for performance, and uniform algorithms and fees-for-service, irrespective of location. However, challenges remain. They include institutional resistance to financial change, growing trial complexity, and the difficulty of sustaining network infrastructure absent stable research work. There is also a need for more central monitoring, improved and harmonized regulations, and a widely applied metric for measuring and comparing cost efficiency in clinical trials. ClinicalTrials.gov is recommended as a location where standardized trial cost information could be made publicly accessible. © The Author(s) 2016.

  2. Embedded correlated wavefunction schemes: theory and applications.

    PubMed

    Libisch, Florian; Huang, Chen; Carter, Emily A

    2014-09-16

    Conspectus Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications. We first discuss our density functional embedding theory, which is formally exact. We show how to determine the embedding potential, which replaces the interaction between subsystems, at the DFT level. CW calculations are performed using a fixed embedding potential, that is, a non-self-consistent embedding scheme. We demonstrate this embedding theory for two challenging electron transfer phenomena: (1) initial oxidation of an aluminum surface and (2) hot-electron-mediated dissociation of hydrogen molecules on a gold surface. In both cases, the interaction between gas molecules and metal surfaces were treated by sophisticated CW techniques, with the remainder of the extended metal surface being treated by DFT. Our embedding approach overcomes the limitations of conventional Kohn-Sham DFT in describing charge transfer, multiconfigurational character, and excited states. From these embedding simulations, we gained important insights into fundamental processes that are crucial aspects of fuel cell catalysis (i.e., O2 reduction at metal surfaces) and plasmon-mediated photocatalysis by metal nanoparticles. Moreover, our findings agree very well with experimental observations, while offering new views into the chemistry. We finally discuss our recently formulated potential-functional embedding theory that provides a seamless, first-principles way to include back-action onto the environment from the embedded region.

  3. Fast Implicit Methods For Elliptic Moving Interface Problems

    DTIC Science & Technology

    2015-12-11

    First, a fast algorithm was derived, analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D-dimensional Euclidean space. These transforms ... evaluation, and one to three orders of magnitude slower than the classical uniform Fast Fourier Transform. Second, bilinear quadratures --- which ...

  4. Complex architecture of primes and natural numbers.

    PubMed

    García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián

    2014-08-01

    Natural numbers can be divided in two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic processes theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emerging property and a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than the original Cramér's version. Regarding composites, the model helps us to derive the prime factors counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help to get deeper insights about primes and the complex architecture of natural numbers.

  5. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) one-fuel (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier’s principle, and the constants of the models have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant detonation cell size determined in the calculations is in good agreement with all known experimental data.

  6. Development and validation of a stability-indicating LC method for the assay of lodenafil carbonate in tablets.

    PubMed

    Codevilla, Cristiane Franco; Lemos, Alice Machado; Delgado, Leila Schreiner; Rolim, Clarice Madalena Bueno; Adams, Andréa Inês Horn; Bergold, Ana Maria

    2011-08-01

    A stability-indicating liquid chromatographic method was developed for the quantitative determination of lodenafil carbonate in tablets. The method employs a Synergi Fusion C18 column (250 × 4.6 mm i.d., 4 μm particle size), with a mobile phase consisting of a mixture of methanol-acetic acid 0.1% pH 4.0 (65:35, v/v) and UV detection at 290 nm using a photodiode array detector. A linear response (r = 0.9999) was observed in the range of 10-80 μg/mL. The method showed good recoveries (average 100.3%) and intra- and inter-day precision (RSD < 2.0%). Validation parameters such as specificity and robustness were also determined. Specificity analysis showed that no impurities or degradation products co-eluted with the lodenafil carbonate peak. The method was found to be stability-indicating and, owing to its simplicity and accuracy, can be applied for routine quality control analysis of lodenafil carbonate in tablets.

  7. Molecular Biosensors for Electrochemical Detection of Infectious Pathogens in Liquid Biopsies: Current Trends and Challenges

    PubMed Central

    Yáñez-Sedeño, Paloma

    2017-01-01

    Rapid and reliable diagnosis of infectious diseases caused by pathogens, and timely initiation of appropriate treatment are critical determinants to promote optimal clinical outcomes and general public health. Conventional in vitro diagnostics for infectious diseases are time-consuming and require centralized laboratories, experienced personnel and bulky equipment. Recent advances in electrochemical affinity biosensors have demonstrated to surpass conventional standards in regards to time, simplicity, accuracy and cost in this field. The tremendous potential offered by electrochemical affinity biosensors to detect on-site infectious pathogens at clinically relevant levels in scarcely treated body fluids is clearly stated in this review. The development and application of selected examples using different specific receptors, assay formats and electrochemical approaches focusing on the determination of specific circulating biomarkers of different molecular (genetic, regulatory and functional) levels associated with bacterial and viral pathogens are critically discussed. Existing challenges still to be addressed and future directions in this rapidly advancing and highly interesting field are also briefly pointed out. PMID:29099764

  8. Experimental investigation of gas flow rate and electric field effect on refractive index and electron density distribution of cold atmospheric pressure-plasma by optical method, Moiré deflectometry

    NASA Astrophysics Data System (ADS)

    Khanzadeh, Mohammad; Jamal, Fatemeh; Shariat, Mahdi

    2018-04-01

    Nowadays, cold atmospheric-pressure (CAP) helium plasma jets are widely used in material processing devices in various industries. Researchers often use indirect, spectrometric methods for measuring the plasma parameters, which are very expensive. In this paper, for the first time, characterization of CAP, i.e., finding parameters such as the refractive index and electron density distribution, was carried out using an optical method, Moiré deflectometry. This method is a wavefront analysis technique based on geometric optics. Its advantages are simplicity, high accuracy, and low cost, along with non-contact, non-destructive, and direct measurement of the CAP parameters. The method demonstrates that as the helium gas flow rate decreases, the refractive index increases. We also note that the refractive index is larger in the gas flow containing the plasma than in the gas flow without the plasma.

  9. Re-examination of cumulative fatigue damage analysis - An engineering perspective

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1986-01-01

    A method which has evolved in the laboratories for the past 20 yr is re-examined with the intent of improving its accuracy and simplicity of application to engineering problems. Several modifications are introduced both to the analytical formulation of the Damage Curve Approach and to the procedure for modifying this approach to achieve a Double Linear Damage Rule formulation, which immensely simplifies the calculation. Improvements are also introduced in the treatment of mean stress for determining the fatigue life of the individual events that enter into a complex loading history. While the procedure is completely consistent with the results of numerous two-level tests that have been conducted on many materials, it is still necessary to verify applicability to complex loading histories. Caution is expressed that certain phenomena can also influence the applicability - for example, unusual deformation and fracture modes inherent in complex loading, especially if stresses are multiaxial. Residual stresses at crack tips and metallurgical factors are also important in creating departures from the cumulative damage theories; examples of departures are provided.
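
    As a baseline for the above, the classical linear damage rule that the Double Linear Damage Rule refines applies in a single phase: damage is D = Σ nᵢ/Nᵢ, with failure predicted at D = 1 (the DLDR applies such a rule separately to crack-initiation and crack-propagation phases; its knee-point equations are not reproduced here). A worked sketch with made-up cycle counts:

        # (n_i applied cycles, N_i life at that stress level), illustrative numbers
        loading = [(2_000, 1.0e4), (5_000, 1.0e5)]

        damage = sum(n / N for n, N in loading)   # Miner: D = sum of n_i / N_i
        print(f"D = {damage:.2f} -> {'failure predicted' if damage >= 1 else 'survives'}")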

  10. Re-examination of cumulative fatigue damage analysis: An engineering perspective

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1986-01-01

    A method which has evolved in our laboratories for the past 20 yr is re-examined with the intent of improving its accuracy and simplicity of application to engineering problems. Several modifications are introduced both to the analytical formulation of the Damage Curve Approach, and to the procedure for modifying this approach to achieve a Double Linear Damage Rule formulation which immensely simplifies the calculation. Improvements are also introduced in the treatment of mean stress for determining fatigue life of the individual events that enter into a complex loading history. While the procedure is completely consistent with the results of numerous two-level tests that have been conducted on many materials, it is still necessary to verify applicability to complex loading histories. Caution is expressed that certain phenomena can also influence the applicability - for example, unusual deformation and fracture modes inherent in complex loading - especially if stresses are multiaxial. Residual stresses at crack tips, and metallurgical factors are also important in creating departures from the cumulative damage theories; examples of departures are provided.
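
    For orientation, the baseline these refinements build on is the linear damage (Miner) rule, in which cycle fractions simply add. A minimal sketch with hypothetical loading blocks (the Damage Curve Approach and Double Linear Damage Rule replace this single linear accumulation with nonlinear or two-phase linear forms):

```python
# Minimal sketch: linear (Miner) cumulative damage, the baseline that the
# Damage Curve Approach and Double Linear Damage Rule refine.
# 'blocks' is hypothetical loading data: (applied cycles, cycles-to-failure).

def miner_damage(blocks):
    """Sum of cycle fractions; failure is predicted when damage reaches 1."""
    return sum(n / N for n, N in blocks)

# Example: 2000 cycles at a level with N_f = 10,000, then 500 at N_f = 2,000.
blocks = [(2000, 10_000), (500, 2_000)]
damage = miner_damage(blocks)
print(f"accumulated damage = {damage:.2f}")  # 0.45 -> 55% of life remains
                                             # under the linear rule
```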

  11. Objective grading of facial paralysis using Local Binary Patterns in video processing.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F

    2008-01-01

    This paper presents a novel framework for the objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine the micro-patterns and large-scale patterns into a feature vector, which increases the algorithmic robustness and reduces noise effects while still retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide a quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.
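
    The LBP building block itself is compact. A minimal sketch of the basic 8-neighbor, radius-1 operator and the "uniform pattern" test on a NumPy image (illustrative only; the paper's multi-resolution, temporal-spatial extension and block schemes are not reproduced):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbor, radius-1 Local Binary Pattern codes for the
    interior pixels of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]                      # center pixels
    # neighbor offsets, ordered around the circle
    offs = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most
    two 0/1 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

img = np.random.randint(0, 256, (64, 64))
codes = lbp_8_1(img)
uniform_share = np.mean([is_uniform(int(v)) for v in codes.ravel()])
```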

  12. Measuring the Optical Performance of Evacuated Receivers via an Outdoor Thermal Transient Test: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutscher, C.; Burkholder, F.; Netter, J.

    2011-08-01

    Modern parabolic trough solar collectors operated at high temperatures to provide the heat input to Rankine steam power cycles employ evacuated receiver tubes along the collector focal line. High performance is achieved via the use of a selective surface with a high absorptance for incoming short-wave solar radiation and a low emittance for outgoing long-wave infrared radiation, as well as the use of a hard vacuum to essentially eliminate convective and conductive heat losses. This paper describes a new method that determines receiver overall optical efficiency by exposing a fluid-filled, pre-cooled receiver to one sun outdoors and measuring the slope of the temperature curve at the point where the receiver temperature passes the glass envelope temperature (that is, the point at which there is no heat gain or loss from the absorber). This transient test method offers the potential advantages of simplicity, high accuracy, and the use of the actual solar spectrum.
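
    The crossover-point measurement reduces to a simple energy balance: when the absorber temperature equals the glass envelope temperature there is no heat loss, so all absorbed solar power goes into sensible heating, giving eta_opt * A * G = C_total * dT/dt. A small sketch under a lumped-capacitance assumption (all numbers are hypothetical placeholders, not values from the paper):

```python
def optical_efficiency(m_fluid, cp_fluid, C_receiver, dTdt, aperture_area, dni):
    """Energy balance at the crossover point (absorber temperature equals
    glass envelope temperature, hence zero heat gain/loss):
        eta_opt * A * G = (m*cp + C_receiver) * dT/dt
    """
    return (m_fluid * cp_fluid + C_receiver) * dTdt / (aperture_area * dni)

# Hypothetical numbers for illustration only.
eta = optical_efficiency(
    m_fluid=11.0,        # kg of fluid in the receiver
    cp_fluid=4186.0,     # J/(kg*K), water
    C_receiver=2500.0,   # J/K, lumped heat capacity of tube + glass
    dTdt=0.012,          # K/s, slope of the temperature curve at crossover
    aperture_area=0.7,   # m^2 of receiver aperture intercepting one sun
    dni=950.0,           # W/m^2, direct normal irradiance
)
print(f"optical efficiency ~ {eta:.2f}")
```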

  13. [Percutaneous cholangiography: Chiba method, a diagnostic advance].

    PubMed

    Correia, R A; Sampaio, R N; Soares, A C; Feijó, S G; Pessoa, J B

    1979-01-01

    Employing percutaneous transhepatic cholangiography by the Chiba method in 15 patients, it was possible to visualize the biliary system in 93.3% of the cases. The radiologic diagnosis was Klatskin tumor in 2 cases, carcinoma of the papilla of Vater in 4 cases, carcinoma of the terminal choledochus in 1 case, intrahepatic neoplasia in 1 case, and stenosis secondary to choledochal trauma in 1 case; in another case the examination was compromised by subcapsular hepatic leakage of contrast. The complications were minor. In 5 cases the patients had biliary colic at the time of the exam. In 2 cases, signs of bacteremia occurred on the day following the exam, and in 4 cases there was a small subcapsular hepatic leakage of contrast. In one case a hematoma under the capsule of the liver was found during surgery. The diagnosis was confirmed at surgery in 12 cases. We conclude that the simplicity of the technique, its low cost and its diagnostic accuracy make it extremely useful.

  14. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure for the accuracy of the estimator.
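
    As a simplified illustration of recursive frequency acquisition (a plain running-mean phase-increment estimator, not the paper's adaptive least-squares algorithm), the following sketch refines its estimate with each new observation and stores none of them:

```python
import numpy as np

def recursive_freq_estimate(samples, fs):
    """Recursively average the phase increment between consecutive complex
    samples; the running mean converges to the sinusoid's frequency.
    Each step refines the estimate without storing past observations."""
    f_hat = 0.0
    prev = samples[0]
    for k, z in enumerate(samples[1:], start=1):
        dphi = np.angle(z * np.conj(prev))          # phase step in radians
        f_inst = dphi * fs / (2 * np.pi)            # instantaneous estimate
        f_hat += (f_inst - f_hat) / k               # recursive (running) mean
        prev = z
    return f_hat

fs, f0, n = 1000.0, 123.4, 2000                     # hypothetical signal
t = np.arange(n) / fs
noise = 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
z = np.exp(2j * np.pi * f0 * t) + noise
print(f"estimated frequency: {recursive_freq_estimate(z, fs):.2f} Hz")
```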

  15. A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mark.owkes@montana.edu; Desjardins, Olivier

    In this work, we present a computational methodology for convection and advection that handles discontinuities with second-order accuracy and maintains conservation to machine precision. This method can transport a variety of discontinuous quantities and is used in the context of an incompressible gas–liquid flow to transport the phase interface, momentum, and scalars. The proposed method provides a modification to the three-dimensional, unsplit, second-order semi-Lagrangian flux method of Owkes & Desjardins (JCP, 2014). The modification adds a refined grid that provides consistent fluxes of mass and momentum defined on a staggered grid and discrete conservation of mass and momentum, even for flows with large density ratios. Additionally, the refined grid doubles the resolution of the interface without significantly increasing the computational cost over previous non-conservative schemes. This is possible due to a novel partitioning of the semi-Lagrangian fluxes into a small number of simplices. The proposed scheme is tested using canonical verification tests, rising bubbles, and an atomizing liquid jet.

  16. Development and validation of a high performance liquid chromatographic method for simultaneous determination of vitamins A and D3 in fluid milk products.

    PubMed

    Chen, Yang; Reddy, Ravinder M; Li, Wenjing; Yettlla, Ramesh R; Lopez, Salvador; Woodman, Michael

    2015-01-01

    An HPLC method for simultaneous determination of vitamins A and D3 in fluid milk was developed and validated. Saponification and extraction conditions were studied for optimum recovery and simplicity. An RP HPLC system equipped with a C18 column and diode array detector was used for quantitation. The method was subjected to a single-laboratory validation using skim, 2% fat, and whole milk samples at concentrations of 50, 100, and 200% of the recommended fortification levels for vitamins A and D3 for Grade "A" fluid milk. The method quantitation limits for vitamins A and D3 were 0.0072 and 0.0026 μg/mL, respectively. Average recoveries between 94 and 110% and SD values ranging from 2.7 to 6.9% were obtained for both vitamins A and D3. The accuracy of the method was evaluated using a National Institute of Standards and Technology standard reference material (1849a) and proficiency test samples.

  17. Ring system-based chemical graph generation for de novo molecular design

    NASA Astrophysics Data System (ADS)

    Miyao, Tomoyuki; Kaneko, Hiromasa; Funatsu, Kimito

    2016-05-01

    Generating chemical graphs in silico by combining building blocks is important and fundamental in virtual combinatorial chemistry. A premise in this area is that generated structures should be irredundant as well as exhaustive. In this study, we develop structure generation algorithms for combining ring systems as well as atom fragments. The proposed algorithms consist of three parts. First, chemical structures are generated through a canonical construction path. During structure generation, ring systems can be treated as reduced graphs having fewer vertices than the original ones. Second, diversified structures are generated by a simple rule-based generation algorithm. Third, the number of structures to be generated can be estimated with adequate accuracy without actual exhaustive generation. The proposed algorithms were implemented in the structure generator Molgilla. As a practical application, Molgilla generated chemical structures mimicking rosiglitazone in terms of a two-dimensional pharmacophore pattern. The strength of the algorithms lies in their simplicity and flexibility. Therefore, they may be applied to various computer programs for structure generation by combining building blocks.

  18. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    PubMed

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity of operation. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.

  19. A New Local Debonding Model with Application to the Transverse Tensile and Creep Behavior of Continuously Reinforced Titanium Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2000-01-01

    A new, widely applicable model for local interfacial debonding in composite materials is presented. Unlike its direct predecessors, the new model allows debonding to progress via unloading of interfacial stresses even as global loading of the composite continues. Previous debonding models employed for analysis of titanium matrix composites are surpassed by the accuracy, simplicity, and efficiency demonstrated by the new model. The new model was designed to operate seamlessly within NASA Glenn's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), which was employed to simulate the time- and rate-dependent (viscoplastic) transverse tensile and creep behavior of SiC/Ti composites. MAC/GMC's ability to simulate the transverse behavior of titanium matrix composites has been significantly improved by the new debonding model. Further, results indicate the need for a more accurate constitutive representation of the titanium matrix behavior in order to enable predictions of the composite transverse response, without resorting to recalibration of the debonding model parameters.

  20. Electroanalysis and laccase-based biosensor on the determination of phenolic content and antioxidant power of honey samples.

    PubMed

    de Oliveira Neto, Jerônimo Raimundo; Rezende, Stefani Garcia; Lobón, Gérman Sanz; Garcia, Telma Alves; Macedo, Isaac Yves Lopes; Garcia, Luane Ferreira; Alves, Virgínia Farias; Torres, Ieda Maria Sapateiro; Santiago, Mariângela Fontes; Schmidt, Fernando; de Souza Gil, Eric

    2017-12-15

    Honey is a widely consumed functional food, so evaluating honey samples for phenolic content and antioxidant capacity (AOC) is relevant to determining their quality. AOC is usually assessed by spectrophotometric methods, which lack reproducibility and practicality; in this context, electroanalytical methods offer greater simplicity and accuracy. Hence, the aim of this work was to use electroanalytical tools and a laccase-based biosensor to evaluate the AOC and total phenol content (TPC) of honey samples from different countries. The antioxidant power established by the electrochemical index correlated well with the spectrophotometric FRAP (Ferric Reducing Ability of Plasma) and DPPH (2,2-Diphenyl-1-Picrylhydrazyl) radical scavenging assays. TPC results obtained with the biosensor also agreed with the Folin-Ciocalteu (FC) assay. In addition to these semiquantitative results, the electroanalysis provided qualitative parameters that were useful for indicating the nature of the major phenolic compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    To offer mobile customers better service, mobile users must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes, and also yields rules about the mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the proposed algorithm achieves higher accuracy and greater simplicity. PMID:24688389
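
    A minimal sketch of the general idea, a genetic algorithm tuning a decision tree, using scikit-learn on synthetic data (the encoding, fitness function and data here are illustrative assumptions, not the paper's algorithm or its mobile-user attributes):

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic 4-class stand-in for the mobile-user dataset.
X, y = make_classification(n_samples=600, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)

def fitness(genome):
    """Cross-validated accuracy of a tree built from the genome
    (max_depth, min_samples_split)."""
    depth, min_split = genome
    clf = DecisionTreeClassifier(max_depth=depth,
                                 min_samples_split=min_split,
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

def mutate(genome):
    depth, min_split = genome
    return (max(1, depth + random.choice([-1, 0, 1])),
            max(2, min_split + random.choice([-2, 0, 2])))

# Tiny GA: rank selection and mutation only (no crossover, for brevity).
pop = [(random.randint(2, 12), random.randint(2, 20)) for _ in range(12)]
for generation in range(15):
    parents = sorted(pop, key=fitness, reverse=True)[:4]   # keep the fittest
    pop = parents + [mutate(random.choice(parents)) for _ in range(8)]

best = max(pop, key=fitness)
print("best (max_depth, min_samples_split):", best, "score:", fitness(best))
```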

  2. Quantification of Finger-Tapping Angle Based on Wearable Sensors

    PubMed Central

    Djurić-Jovičić, Milica; Jovičić, Nenad S.; Roby-Brami, Agnes; Popović, Mirjana B.; Kostić, Vladimir S.; Djordjević, Antonije R.

    2017-01-01

    We propose a novel simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index-finger. We propose a simplified description of the finger tapping by using a single angle, describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks, mimicking impaired patterns. The obtained tapping angles were compared with results of a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnostic. Along with hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems. PMID:28125051

  3. Quantification of Finger-Tapping Angle Based on Wearable Sensors.

    PubMed

    Djurić-Jovičić, Milica; Jovičić, Nenad S; Roby-Brami, Agnes; Popović, Mirjana B; Kostić, Vladimir S; Djordjević, Antonije R

    2017-01-25

    We propose a novel simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index-finger. We propose a simplified description of the finger tapping by using a single angle, describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks, mimicking impaired patterns. The obtained tapping angles were compared with results of a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnostic. Along with hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems.
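
    A minimal sketch of the single-angle reduction (with two illustrative assumptions: the dominant axis is taken as the first principal component of the angular-rate samples, and the angle follows from trapezoidal integration of the projected rate):

```python
import numpy as np

def tapping_angle(omega, fs):
    """Reduce 3-D gyroscope angular rate (N x 3, rad/s) to a single angle
    about a dominant axis. Dominant axis here: first principal component
    of the rate samples (an assumption, not the paper's exact procedure)."""
    centered = omega - omega.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                              # dominant rotation axis
    rate = omega @ axis                       # scalar angular rate (rad/s)
    # trapezoidal integration of rate -> angle
    angle = np.concatenate(([0.0], np.cumsum((rate[1:] + rate[:-1]) / 2) / fs))
    return np.degrees(angle)

fs = 200.0                                    # Hz, hypothetical sample rate
t = np.arange(0, 2, 1 / fs)
# synthetic tapping: 3 Hz oscillation mostly about one axis
rate_main = 8.0 * np.sin(2 * np.pi * 3 * t)   # rad/s
omega = np.column_stack([0.1 * rate_main, rate_main, 0.05 * rate_main])
angle_deg = tapping_angle(omega, fs)
```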

  4. Two-dimensional fracture analysis of piezoelectric material based on the scaled boundary node method

    NASA Astrophysics Data System (ADS)

    Shen-Shen, Chen; Juan, Wang; Qing-Hua, Li

    2016-04-01

    A scaled boundary node method (SBNM) is developed for two-dimensional fracture analysis of piezoelectric material, which allows the stress and electric displacement intensity factors to be calculated directly and accurately. As a boundary-type meshless method, the SBNM employs the moving Kriging (MK) interpolation technique to approximate the unknown field in the circumferential direction, and therefore only a set of scattered nodes is required to discretize the boundary. As the shape functions satisfy the Kronecker delta property, no special techniques are required to impose the essential boundary conditions. In the radial direction, the SBNM seeks analytical solutions by making use of analytical techniques available to solve ordinary differential equations. Numerical examples are investigated and satisfactory solutions are obtained, which validates the accuracy and simplicity of the proposed approach. Project supported by the National Natural Science Foundation of China (Grant Nos. 11462006 and 21466012), the Foundation of Jiangxi Provincial Educational Committee, China (Grant No. KJLD14041), and the Foundation of East China Jiaotong University, China (Grant No. 09130020).

  5. A simplified method for correcting contaminant concentrations in eggs for moisture loss.

    USGS Publications Warehouse

    Heinz, Gary H.; Stebbins, Katherine R.; Klimstra, Jon D.; Hoffman, David J.

    2009-01-01

    We developed a simplified and highly accurate method for correcting contaminant concentrations in eggs for the moisture that is lost from an egg during incubation. To make the correction, one injects water into the air cell of the egg until overflowing. The amount of water injected corrects almost perfectly for the amount of water lost during incubation or when an egg is left in the nest and dehydrates and deteriorates over time. To validate the new method we weighed freshly laid chicken (Gallus gallus) eggs and then incubated sets of fertile and dead eggs for either 12 or 19 d. We then injected water into the air cells of these eggs and verified that the weights after water injection were almost identical to the weights of the eggs when they were fresh. The advantages of the new method are its speed, accuracy, and simplicity: It does not require the calculation of a correction factor that has to be applied to each contaminant residue.
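
    The arithmetic behind the correction is simple: the contaminant mass in the egg is fixed, so a concentration measured on a partially dehydrated egg is rescaled by the ratio of the egg's weight at analysis to its fresh weight, and the water-injection step recovers that fresh weight directly. A small sketch with hypothetical numbers:

```python
def moisture_corrected_concentration(measured_conc, weight_at_analysis,
                                     weight_after_injection):
    """Contaminant mass is constant, so
       conc_fresh = conc_measured * (weight_at_analysis / fresh_weight),
    where the weight after filling the air cell with water approximates
    the fresh egg weight."""
    return measured_conc * weight_at_analysis / weight_after_injection

# Hypothetical example: 2.0 ug/g measured on an egg that lost 5 g of water.
print(moisture_corrected_concentration(
    measured_conc=2.0,           # ug/g wet weight at analysis
    weight_at_analysis=55.0,     # g, dehydrated egg
    weight_after_injection=60.0  # g, ~fresh weight restored by injection
))  # -> 1.83 ug/g on a fresh-weight basis
```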

  6. Model of coordination melting of crystals and anisotropy of physical and chemical properties of the surface

    NASA Astrophysics Data System (ADS)

    Bokarev, Valery P.; Krasnikov, Gennady Ya

    2018-02-01

    Based on the evaluation of crystal properties such as the surface energy and its anisotropy, the surface melting temperature, the anisotropy of the electron work function, and the anisotropy of adsorption, the advantages of the model of coordination melting (MCM) in calculating the surface properties of crystals were demonstrated. The model makes it possible to calculate, with acceptable accuracy, the specific surface energy of crystals, the anisotropy of the surface energy, the habit of natural crystals, the surface melting temperature of a crystal, the anisotropy of the electron work function, and the anisotropy of the adhesive properties of single-crystal surfaces. The advantage of our model is the simplicity of evaluating the surface properties of a crystal from data given in the reference literature; there is no need for the complex mathematical apparatus used in quantum-chemistry calculations or molecular dynamics modeling.

  7. Video see-through augmented reality for oral and maxillofacial surgery.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2017-06-01

    Oral and maxillofacial surgery has not benefited from image-guidance techniques owing to the limitations of image registration. A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. The proposed system was evaluated on mandible/maxilla phantoms, a volunteer and clinical data. Experimental results show that the target overlay error is about 1 mm, and the frame rate of registration update yields 3-5 frames per second with a 4K camera. The significance of this work lies in its simplicity in the clinical setting and its seamless integration into the current medical procedure with satisfactory response time and overlay accuracy. Copyright © 2016 John Wiley & Sons, Ltd.

  8. A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean

    NASA Astrophysics Data System (ADS)

    Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid

    2017-10-01

    A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and the Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. The spatial distribution of the first mode indicated that the greatest variations in DOISST occurred in the Persian Gulf. The slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high accuracy, the enhanced RMEV can be applied to forecast DOISST in oceans where the poor forecasting performance and heavy computational cost of other numerical models may preclude their use.
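
    The reduced-order idea, retaining only the leading eigen-modes of the space-time field, can be sketched with an SVD (illustrative only; the enhanced RMEV's forecasting step is not reproduced):

```python
import numpy as np

def leading_modes(field, n_modes=2):
    """Decompose a (time x space) anomaly field into eigen-modes and
    reconstruct it from the leading n_modes, as a reduced-order model
    based on eigenvectors would."""
    mean = field.mean(axis=0)
    u, s, vt = np.linalg.svd(field - mean, full_matrices=False)
    # temporal components: u[:, :n] * s[:n]; spatial patterns: vt[:n]
    recon = mean + (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
    return recon, u[:, :n_modes] * s[:n_modes], vt[:n_modes]

# synthetic SST-like field: 500 daily maps over 300 grid points
rng = np.random.default_rng(0)
t = np.arange(500)[:, None]
space = np.linspace(0, np.pi, 300)[None, :]
field = (26 + 2 * np.sin(2 * np.pi * t / 365) * np.sin(space)  # seasonal mode
         + 0.002 * t * np.cos(space)                           # slow trend
         + 0.3 * rng.standard_normal((500, 300)))              # noise
recon, temporal, spatial = leading_modes(field, n_modes=2)
print("RMS reconstruction error:", np.sqrt(((field - recon) ** 2).mean()))
```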

  9. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  10. Molecular Biosensors for Electrochemical Detection of Infectious Pathogens in Liquid Biopsies: Current Trends and Challenges.

    PubMed

    Campuzano, Susana; Yáñez-Sedeño, Paloma; Pingarrón, José Manuel

    2017-11-03

    Rapid and reliable diagnosis of infectious diseases caused by pathogens, and timely initiation of appropriate treatment are critical determinants to promote optimal clinical outcomes and general public health. Conventional in vitro diagnostics for infectious diseases are time-consuming and require centralized laboratories, experienced personnel and bulky equipment. Recent advances in electrochemical affinity biosensors have demonstrated to surpass conventional standards in regards to time, simplicity, accuracy and cost in this field. The tremendous potential offered by electrochemical affinity biosensors to detect on-site infectious pathogens at clinically relevant levels in scarcely treated body fluids is clearly stated in this review. The development and application of selected examples using different specific receptors, assay formats and electrochemical approaches focusing on the determination of specific circulating biomarkers of different molecular (genetic, regulatory and functional) levels associated with bacterial and viral pathogens are critically discussed. Existing challenges still to be addressed and future directions in this rapidly advancing and highly interesting field are also briefly pointed out.

  11. A reduction package for cross-dispersed echelle spectrograph data in IDL

    NASA Astrophysics Data System (ADS)

    Hall, Jeffrey C.; Neff, James E.

    1992-12-01

    We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.

  12. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting.

    PubMed

    Hu, Yi-Chung

    2017-01-01

    Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants.

  13. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting

    PubMed Central

    2017-01-01

    Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants. PMID:28981548
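
    For reference, a minimal sketch of the base GM(1,1) model (the GA-optimized remnant variant builds on this by fitting a further GM(1,1) to the signed residuals, which is not shown):

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Base GM(1,1): accumulate the series, fit dx1/dt + a*x1 = b by
    least squares on the mean-generated sequence, then difference back."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=x1_hat[0] - x0[0])  # back to x0 scale

demand = [21.3, 22.1, 23.4, 24.9, 26.2, 27.8]       # hypothetical series
print(gm11_forecast(demand))                        # fit + 3-step forecast
```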

  14. A fast Fourier transform on multipoles (FFTM) algorithm for solving Helmholtz equation in acoustics analysis.

    PubMed

    Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng

    2004-09-01

    This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of the multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computations of the translation operators are accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexities of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators as compared to the fast multipole method. Hence, despite its simplicity in implementation, this memory requirement issue may limit the application of this algorithm to solving very large-scale problems.

  15. Coherence penalty functional: A simple method for adding decoherence in Ehrenfest dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimov, Alexey V., E-mail: alexvakimov@gmail.com, E-mail: oleg.prezhdo@rochester.edu; Chemistry Department, Brookhaven National Laboratory, Upton, New York 11973; Long, Run

    2014-05-21

    We present a new semiclassical approach for the description of decoherence in electronically non-adiabatic molecular dynamics. The method is formulated on the grounds of the Ehrenfest dynamics and the Meyer-Miller-Thoss-Stock mapping of the time-dependent Schrödinger equation onto a fully classical Hamiltonian representation. We introduce a coherence penalty functional (CPF) that accounts for decoherence effects by randomizing the wavefunction phase and penalizing the development of coherences in regions of strong non-adiabatic coupling. The performance of the method is demonstrated with several model and realistic systems. Compared to other semiclassical methods tested, the CPF method eliminates artificial interference and improves agreement with the fully quantum calculations on the models. When applied to study electron transfer dynamics in nanoscale systems, the method shows an improved accuracy of the predicted time scales. The simplicity and high computational efficiency of the CPF approach make it a perfect practical candidate for applications in realistic systems.

  16. The diffusion of evidence-based decision making among local health department practitioners in the United States.

    PubMed

    Harris, Jenine K; Erwin, Paul C; Smith, Carson; Brownson, Ross C

    2015-01-01

    Evidence-based decision making (EBDM) is the process, in local health departments (LHDs) and other settings, of translating the best available scientific evidence into practice. Local health departments are more likely to be successful if they use evidence-based strategies. However, EBDM and use of evidence-based strategies by LHDs are not widespread. Drawing on diffusion of innovations theory, we sought to understand how LHD directors and program managers perceive the relative advantage, compatibility, simplicity, and testability of EBDM. Directors and managers of programs in chronic disease, environmental health, and infectious disease from LHDs nationwide completed a survey including demographic information and questions about diffusion attributes (advantage, compatibility, simplicity, and testability) related to EBDM. Bivariate inferential tests were used to compare responses between directors and managers and to examine associations between participant characteristics and diffusion attributes. Relative advantage and compatibility scores were high for directors and managers, whereas simplicity and testability scores were lower. Although health department directors and managers of programs in chronic disease generally had higher scores than other groups, there were few significant or large differences between directors and managers across the diffusion attributes. Larger jurisdiction population size was associated with higher relative advantage and compatibility scores for both directors and managers. Overall, directors and managers were in strong agreement on the relative advantage of an LHD using EBDM, with directors in stronger agreement than managers. Perceived relative advantage has been demonstrated to be the most important factor in the rate of innovation adoption, suggesting an opportunity for directors to speed EBDM adoption. However, lower average scores across all groups for simplicity and testability may be hindering EBDM adoption. Recommended strategies for increasing perceived EBDM simplicity and testability are provided.

  17. Implementation and validation of an implant-based coordinate system for RSA migration calculation.

    PubMed

    Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J

    2009-10-16

    An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of implementing two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component as an implant marker and (ii) defining an implant-based coordinate system constructed from implant landmarks for the calculation of migration results. The motivations for these two modifications were (i) to improve the representation of the implant by including the stem tip marker, which increases the marker distribution; (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene; and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision of better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods to improve the interpretability, relevance, and standardization of the results do not compromise precision and accuracy, and are suitable for application to clinical data.

  18. Impact of transmission intensity on the accuracy of genotyping to distinguish recrudescence from new infection in antimalarial clinical trials.

    PubMed

    Greenhouse, Bryan; Dokomajilar, Christian; Hubbard, Alan; Rosenthal, Philip J; Dorsey, Grant

    2007-09-01

    Antimalarial clinical trials use genotyping techniques to distinguish new infection from recrudescence. In areas of high transmission, the accuracy of genotyping may be compromised due to the high number of infecting parasite strains. We compared the accuracies of genotyping methods, using up to six genotyping markers, to assign outcomes for two large antimalarial trials performed in areas of Africa with different transmission intensities. We then estimated the probability of genotyping misclassification and its effect on trial results. At a moderate-transmission site, three genotyping markers were sufficient to generate accurate estimates of treatment failure. At a high-transmission site, even with six markers, estimates of treatment failure were 20% for amodiaquine plus artesunate and 17% for artemether-lumefantrine, regimens expected to be highly efficacious. Of the observed treatment failures for these two regimens, we estimated that at least 45% and 35%, respectively, were new infections misclassified as recrudescences. Increasing the number of genotyping markers improved the ability to distinguish new infection from recrudescence at a moderate-transmission site, but using six markers appeared inadequate at a high-transmission site. Genotyping-adjusted estimates of treatment failure from high-transmission sites may represent substantial overestimates of the true risk of treatment failure.

  19. Unmitigated numerical solution to the diffraction term in the parabolic nonlinear ultrasound wave equation.

    PubMed

    Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan

    2013-09-01

    Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator which in turn results in fewer calculation gridding nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in calculation of the diffraction term in the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
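
    As a generic illustration of the two named ingredients, a five-point finite-difference transverse Laplacian and an LU-factorized implicit solve, here is a sketch on a small 2-D grid with SciPy (standard discretization code, not the paper's empirical algorithm):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def five_point_laplacian(n, h):
    """Sparse five-point Laplacian on an n x n interior grid, spacing h,
    with homogeneous Dirichlet boundaries."""
    main = -4.0 * np.ones(n * n)
    side = np.ones(n * n - 1)
    side[np.arange(1, n * n) % n == 0] = 0.0      # no wrap across grid rows
    updown = np.ones(n * n - n)
    A = sp.diags([main, side, side, updown, updown],
                 [0, 1, -1, n, -n], format="csc")
    return A / h ** 2

n, h = 64, 1.0 / 65
L = five_point_laplacian(n, h)
# implicit step of a diffraction-like equation: (I - c*L) u_new = u_old
c = 1e-4
A = sp.identity(n * n, format="csc") - c * L
lu = splu(A)                       # factor once ...
u = np.random.rand(n * n)
for _ in range(10):                # ... then reuse for many cheap solves
    u = lu.solve(u)
```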

  20. The short-term effects of trigger point therapy, stretching and medicine ball exercises on accuracy and back swing hip turn in elite, male golfers - A randomised controlled trial.

    PubMed

    Quinn, Samantha-Lynn; Olivier, Benita; Wood, Wendy-Ann

    2016-11-01

    This study aimed to compare the effect of myofascial trigger point therapy (MTPT) and stretching, MTPT and medicine ball exercises, and no intervention, on hip flexor length (HFL), golf swing biomechanics and performance in elite male golfers. A single-blind, randomised controlled trial with two experimental groups (stretch group: MTPT and stretching; ball group: MTPT, a single stretch and medicine ball exercises) and one control group (no intervention) was conducted at a professional golf academy with one hundred elite male golfers aged 16-25 years. Outcomes were HFL, 3D biomechanical analysis of the golf swing, club head speed (CHS), smash ratio, accuracy and distance at baseline and after the interventions. Backswing hip turn (BSHT) improved in the ball group relative to the control group (p = 0.0248). Accuracy in the ball group and the stretch group improved relative to the control group (Fisher's exact = 0.016). Other performance parameters, such as smash ratio, distance and CHS, were not compromised by either intervention. This study advocates the use of MTPT combined with medicine ball exercises over MTPT combined with stretching in the treatment of golfers with shortened hip flexors - even immediately preceding a tournament. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. An optimized design to reduce eddy current sensitivity in velocity-selective arterial spin labeling using symmetric BIR-8 pulses.

    PubMed

    Guo, Jia; Meakin, James A; Jezzard, Peter; Wong, Eric C

    2015-03-01

    Velocity-selective arterial spin labeling (VSASL) tags arterial blood on a velocity-selective (VS) basis and eliminates the tagging/imaging gap and associated transit delay sensitivity observed in other ASL tagging methods. However, the flow-weighting gradient pulses in VS tag preparation can generate eddy currents (ECs), which may erroneously tag the static tissue and create artificial perfusion signal, compromising the accuracy of perfusion quantification. A novel VS preparation design is presented using an eight-segment B1 insensitive rotation with symmetric radio frequency and gradient layouts (sym-BIR-8), combined with delays after gradient pulses to optimally reduce ECs of a wide range of time constants while maintaining B0 and B1 insensitivity. Bloch simulation, phantom, and in vivo experiments were carried out to determine robustness of the new and existing pulse designs to ECs, B0 , and B1 inhomogeneity. VSASL with reduced EC sensitivity across a wide range of EC time constants was achieved with the proposed sym-BIR-8 design, and the accuracy of cerebral blood flow measurement was improved. The sym-BIR-8 design performed the most robustly among the existing VS tagging designs, and should benefit studies using VS preparation with improved accuracy and reliability. © 2014 Wiley Periodicals, Inc.

  2. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.

  3. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
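
    The UID idea at the heart of such error correction can be sketched simply: reads sharing a molecular tag are collapsed to a per-position majority consensus, so errors that differ between copies of the same molecule are voted out (illustrative only; MAF additionally models amplification bias):

```python
from collections import Counter, defaultdict

def uid_consensus(tagged_reads):
    """Collapse reads that share a unique molecular identifier (UID) into
    one consensus sequence by per-position majority vote. Assumes reads
    within a UID family have equal length."""
    groups = defaultdict(list)
    for uid, seq in tagged_reads:
        groups[uid].append(seq)
    consensus = {}
    for uid, seqs in groups.items():
        consensus[uid] = "".join(
            Counter(bases).most_common(1)[0][0] for bases in zip(*seqs))
    return consensus

reads = [                       # hypothetical UID-tagged reads
    ("AAGT", "ACGTACGT"),
    ("AAGT", "ACGTACGA"),       # sequencing error at the last base
    ("AAGT", "ACGTACGT"),
    ("CCTA", "TTGCAAGC"),
]
print(uid_consensus(reads))     # errors voted out within each UID family
```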

  4. Cognitive simplicity and self-deception are crucial in martyrdom and suicide terrorism.

    PubMed

    Fink, Bernhard; Trivers, Robert

    2014-08-01

    Suicide attacks and terrorism are characterized by cognitive simplicity, which is related to self-deception. In justifying violence in pursuit of ideologically and/or politically driven commitment, people with high religious commitment may be particularly prone to mechanisms of self-deception. Related megalomania and glorious self-perception are typical of self-deception, and are thus crucial in the emergence and expression of (suicide) terrorism.

  5. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    PubMed

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  6. GNSS/Electronic Compass/Road Segment Information Fusion for Vehicle-to-Vehicle Collision Avoidance Application

    PubMed Central

    Sun, Rui; Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto

    2017-01-01

    The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field representing a low density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior to positioning accuracy of GNSS only, traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collision. PMID:29186851

  7. GNSS/Electronic Compass/Road Segment Information Fusion for Vehicle-to-Vehicle Collision Avoidance Application.

    PubMed

    Sun, Rui; Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto

    2017-11-25

    The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field representing a low density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior to positioning accuracy of GNSS only, traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collision.
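
    A minimal bootstrap particle-filter sketch for one-dimensional position tracking with an AR(1) velocity model and Gaussian position fixes (illustrative assumptions throughout; the paper's fusion of RTK GNSS, compass heading and road-segment data is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, dt = 500, 0.1
phi, q_v, r = 0.98, 0.3, 0.5        # AR(1) coefficient, process/meas. noise

# particle state: position and velocity
pos = rng.normal(0.0, 1.0, n_particles)
vel = rng.normal(0.0, 1.0, n_particles)

def pf_step(z):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    global pos, vel
    vel = phi * vel + rng.normal(0.0, q_v, n_particles)   # AR(1) motion
    pos = pos + vel * dt
    w = np.exp(-0.5 * ((z - pos) / r) ** 2)               # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)       # resample
    pos, vel = pos[idx], vel[idx]
    return np.mean(pos)                                   # state estimate

true_pos = 0.0
for k in range(50):
    true_pos += 1.0 * dt                                  # target moves
    z = true_pos + rng.normal(0.0, r)                     # noisy position fix
    estimate = pf_step(z)
print(f"final estimate {estimate:.2f} vs true {true_pos:.2f}")
```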

  8. Impaired performance from brief social isolation of rhesus monkeys (Macaca mulatta) - A multiple video-task assessment

    NASA Technical Reports Server (NTRS)

    Washburn, David A.; Rumbaugh, Duane M.

    1991-01-01

    Social isolation has been demonstrated to produce profound and lasting psychological effects in young primates. In the present investigation, two adult rhesus monkeys (Macaca mulatta) were isolated from one another for up to 6 days and tested on 7 video tasks designed to assess psychomotor and cognitive functioning. Both the number and quality (i.e., speed and accuracy) of responses were significantly compromised in the social isolation condition relative to levels in which the animals were tested together. It is argued that adult rhesus are susceptible to performance disruption by even relatively brief social isolation, and that these effects can best be assessed by a battery of complex and sensitive measures.

  9. Revealing all: misleading self-disclosure rates in laboratory-based online research.

    PubMed

    Callaghan, Diana E; Graff, Martin G; Davies, Joanne

    2013-09-01

    Laboratory-based experiments in online self-disclosure research may be inadvertently compromising the accuracy of research findings by influencing some of the factors known to affect self-disclosure behavior. Disclosure-orientated interviews conducted with 42 participants in the laboratory and in nonlaboratory settings revealed significantly greater breadth of self-disclosure in laboratory interviews, with message length and intimacy of content also strongly related. These findings suggest that a contrived online setting with a researcher presence may stimulate motivation for greater self-disclosure than would occur naturally in an online environment of an individual's choice. The implications of these findings are that researchers should consider the importance of experimental context and motivation in self-disclosure research.

  10. [Data sources, the data used, and the modality for collection].

    PubMed

    Mercier, G; Costa, N; Dutot, C; Riche, V-P

    2018-03-01

    The hospital costing process requires access to various sources of data. Whether a micro-costing or a gross-costing approach is used, the choice of methodology rests on a compromise between the cost of data collection, data accuracy, and data transferability. This work describes the data sources available in France and their access modalities, as well as the main advantages and shortcomings of: (1) local unit costs, (2) hospital analytical accounting, (3) the Angers database, (4) the National Health Cost Studies, (5) the INTER CHR/U databases, (6) the Program for Medicalizing Information Systems, and (7) the public health insurance databases. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  11. GPU-accelerated computation of electron transfer.

    PubMed

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
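
    The abstract does not name the GPU linear algebra library used; as a hedged illustration of the same idea (offloading dense BLAS operations to the GPU and copying results back to the host), the sketch below uses CuPy, whose matrix product is backed by cuBLAS.

    ```python
    import numpy as np
    import cupy as cp  # GPU array library with BLAS-backed linear algebra

    n = 4096
    a_cpu = np.random.rand(n, n).astype(np.float32)
    b_cpu = np.random.rand(n, n).astype(np.float32)

    a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)  # host -> device
    c_gpu = a_gpu @ b_gpu                                # runs on the GPU via cuBLAS
    c = cp.asnumpy(c_gpu)                                # device -> host
    ```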

  12. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently-identified proteins were recovered, and the peptide-spectrum-match (PSM) FDR was re-calculated and controlled at a confident level of FDR≤1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. An increase of >60% in total quantified spectra/peptides was achieved for a spike-in sample set and for a public dataset from CPTAC. Incorporating the peptide retrieval strategy significantly improved quantitative accuracy and precision, especially for low-abundance proteins (e.g., one-hit proteins). Moreover, the capacity to confidently discover significantly-altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, and the strategy can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesize that more quantifiable spectra and peptides per protein, even including less confident peptides, help reduce variation and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations, using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins from the standard target-decoy search strategy was fixed, and additional, less confident spectra/peptides matched to those confident proteins were retrieved; the total PSM FDR after retrieval was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible by comparison with the improvements in quantitative performance. More quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision were achieved for the same protein identifications by this simple strategy. The strategy is in principle applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
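
    A minimal sketch of the target-decoy bookkeeping that such a strategy relies on: the FDR at a score threshold is estimated as the ratio of decoy to target PSMs above it, and retrieval keeps recovered PSMs whose recomputed q-value stays at or below 1%. The function name and the exact estimator convention are assumptions, not the authors' code.

    ```python
    import numpy as np

    def psm_qvalues(scores, is_decoy):
        """Target-decoy q-values; is_decoy is a boolean array aligned with scores.

        FDR at a score cut is estimated as (#decoys above cut) / (#targets above cut).
        """
        order = np.argsort(scores)[::-1]                 # best-scoring PSMs first
        decoys = np.cumsum(is_decoy[order])
        targets = np.arange(1, len(scores) + 1) - decoys
        fdr = decoys / np.maximum(targets, 1)
        qvals = np.minimum.accumulate(fdr[::-1])[::-1]   # enforce monotonicity
        out = np.empty_like(qvals)
        out[order] = qvals                               # back to input order
        return out

    # Retrieval idea: among recovered spectra matched to already-confident
    # proteins, keep every PSM whose recomputed q-value is <= 0.01 (FDR <= 1%).
    ```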

  13. 45 CFR 30.22 - Bases for compromise.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Bases for compromise. 30.22 Section 30.22 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CLAIMS COLLECTION Debt Compromise § 30.22 Bases for compromise. (a) Compromise. The Secretary may compromise a debt if the full amount...

  14. 45 CFR 30.22 - Bases for compromise.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Bases for compromise. 30.22 Section 30.22 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CLAIMS COLLECTION Debt Compromise § 30.22 Bases for compromise. (a) Compromise. The Secretary may compromise a debt if the full amount...

  15. 45 CFR 30.22 - Bases for compromise.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Bases for compromise. 30.22 Section 30.22 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CLAIMS COLLECTION Debt Compromise § 30.22 Bases for compromise. (a) Compromise. The Secretary may compromise a debt if the full amount...

  16. 45 CFR 30.22 - Bases for compromise.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Bases for compromise. 30.22 Section 30.22 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CLAIMS COLLECTION Debt Compromise § 30.22 Bases for compromise. (a) Compromise. The Secretary may compromise a debt if the full amount...

  17. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks

    PubMed Central

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-01-01

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals. PMID:25615731

  18. Metacognitive impairment in active cocaine use disorder is associated with individual differences in brain structure

    PubMed Central

    Moeller, Scott J.; Fleming, Stephen M.; Gan, Gabriela; Zilverstand, Anna; Malaker, Pias; Uquillas, Federico d’Oleire; Schneider, Kristin E.; Preston-Campbell, Rebecca; Parvaz, Muhammad A.; Maloney, Thomas; Alia-Klein, Nelly; Goldstein, Rita Z.

    2016-01-01

    Dysfunctional self-awareness has been posited as a key feature of drug addiction, contributing to compromised control over addictive behaviors. In the present investigation, we showed that, compared with healthy controls (n=13) and even individuals with remitted cocaine use disorder (n=14), individuals with active cocaine use disorder (n=8) exhibited deficits in basic metacognition, defined as a weaker link between objective performance and self-reported confidence of performance on a visuo-perceptual accuracy task. This metacognitive deficit was accompanied by gray matter volume decreases, also most pronounced in individuals with active cocaine use disorder, in the rostral anterior cingulate cortex, a region necessary for this function in health. Our results thus provide a direct unbiased measurement – not relying on long-term memory or multifaceted choice behavior – of metacognition deficits in drug addiction, which are further mapped onto structural deficits in a brain region that subserves metacognitive accuracy in health and self-awareness in drug addiction. Impairments of metacognition could provide a basic mechanism underlying the higher-order self-awareness deficits in addiction, particularly among recent, active users. PMID:26948669

  19. Assessment of female breast dose for thoracic cone-beam CT using MOSFET dosimeters.

    PubMed

    Sun, Wenzhao; Wang, Bin; Qiu, Bo; Liang, Jian; Xie, Weihao; Deng, Xiaowu; Qi, Zhenyu

    2017-03-21

    This study assessed breast dose during routine thoracic cone-beam CT (CBCT) imaging and explored possible dose reduction strategies. Metal oxide semiconductor field-effect transistor (MOSFET) dosimeters were used to measure breast surface doses during a thoracic kV CBCT scan of an anthropomorphic phantom. Breast doses for different scanning protocols and breast sizes were compared. Dose reduction was attempted by using a partial-arc CBCT scan with a bowtie filter, and the impact of this strategy on image registration accuracy was investigated. The average breast surface doses were 20.02 mGy and 11.65 mGy for thoracic CBCT without and with filtration, respectively, indicating a dose reduction of 41.8% from use of the bowtie filter. It was found that a 220° partial-arc scan significantly reduced the dose to the contralateral breast (44.4% lower than the ipsilateral breast), while image registration accuracy was not compromised. Breast dose reduction can thus be achieved by using an ipsilateral 220° partial-arc scan with a bowtie filter; this strategy also provides sufficient image quality for thoracic image registration in daily patient positioning verification.
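
    As a quick check of the reported figure, the percentage reduction follows directly from the two measured doses:

    ```latex
    \frac{20.02\,\mathrm{mGy} - 11.65\,\mathrm{mGy}}{20.02\,\mathrm{mGy}} \approx 0.418 \quad\Rightarrow\quad 41.8\%
    ```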

  20. Methods for assessment of keel bone damage in poultry.

    PubMed

    Casey-Trott, T; Heerkens, J L T; Petrik, M; Regmi, P; Schrader, L; Toscano, M J; Widowski, T

    2015-10-01

    Keel bone damage (KBD) is a critical issue facing the laying hen industry today, as a result of the pain it likely causes, leading to compromised welfare, and its potential to reduce productivity. Recent reports suggest that damage, while highly variable and likely dependent on a host of factors, extends to all systems (including battery cages, furnished cages, and non-cage systems), genetic lines, and management styles. Despite the extent of the problem, the research community remains uncertain as to the causes and influencing factors of KBD. Although progress has been made investigating these factors, the overall effort is hindered by several issues related to the assessment of KBD, including the quality of, and variation in, the methods used between research groups. These issues prevent effective comparison of studies and create difficulties in identifying the presence of damage, leading to poor accuracy and reliability. The current manuscript seeks to resolve these issues by offering precise definitions for types of KBD, reviewing methods for assessment, and providing recommendations that can improve the accuracy and reliability of those assessments. © 2015 Poultry Science Association Inc.

  1. An adaptive reconstruction for Lagrangian, direct-forcing, immersed-boundary methods

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Vanella, Marcos; Balaras, Elias

    2017-12-01

    Lagrangian, direct-forcing, immersed boundary (IB) methods have been receiving increased attention due to their robustness in complex fluid-structure interaction problems. They are very sensitive, however, to the selection of the Lagrangian grid, which is typically used to define a solid or flexible body immersed in a fluid flow. In the present work we propose a cost-efficient solution to this problem without compromising accuracy. Central to our approach is the use of isoparametric mapping to bridge the relative resolution requirements of the Lagrangian IB and Eulerian grids. With this approach, the density of surface Lagrangian markers, which is essential to properly enforce boundary conditions, is adapted dynamically based on the characteristics of the underlying Eulerian grid. The markers are not stored and the Lagrangian data structure is not modified. The proposed scheme is implemented in the framework of a moving least squares reconstruction formulation, but it can be adapted to any Lagrangian, direct-forcing formulation. The accuracy and robustness of the approach are demonstrated in a variety of test cases of increasing complexity.
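
    A minimal sketch of the isoparametric idea: markers are generated on the fly by mapping a uniform parametric grid through an element's shape functions, with the subdivision level chosen from the local Eulerian spacing. The bilinear quad element, the 0.7 spacing factor, and the function name are assumptions for illustration; the paper's moving least squares reconstruction is not shown.

    ```python
    import numpy as np

    def markers_on_quad(nodes, h_euler, factor=0.7):
        """Generate Lagrangian surface markers on a bilinear quad element.

        nodes: (4, 3) corner coordinates, ordered counter-clockwise.
        h_euler: local Eulerian grid spacing; marker spacing targets factor*h_euler.
        """
        # Pick the subdivision level from the element's longest edge
        edge = max(np.linalg.norm(nodes[i] - nodes[(i + 1) % 4]) for i in range(4))
        n = max(2, int(np.ceil(edge / (factor * h_euler))) + 1)
        u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        u, v = u.ravel(), v.ravel()
        # Bilinear shape functions of the isoparametric mapping
        shape = np.stack([(1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v], axis=1)
        return shape @ nodes   # (n*n, 3) marker positions
    ```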

  2. Sex assessment using measurements of the first lumbar vertebra.

    PubMed

    Zheng, Wen Xu; Cheng, Fu Bo; Cheng, Kai Liang; Tian, Yong; Lai, Ying; Zhang, Wen Song; Zheng, Ya Juan; Li, You Qiong

    2012-06-10

    Sex determination is a vital part of the medico-legal system but can be difficult in cases where the integrity of the body has been compromised. The purpose of this study was to develop a technique for sex assessment from measurements of the first lumbar vertebra. Twenty-nine linear measurements and five ratios were collected from 113 Chinese adult males and 97 Chinese adult females using digital three-dimensional anthropometry methods. Using discriminant analysis, we found that 23 linear measurements and two ratios exhibited sexual dimorphism (P<0.01), with predictive accuracy ranging from 57.1% to 86.6%. Using a stepwise method of discriminant function analysis, we found that three dimensions predicted sex with 88.6% accuracy: (a) upper end-plate width (EPWu), (b) left pedicle height (PHl), and (c) middle end-plate depth (EPDm). This study shows that a single first lumbar vertebra can be used for this purpose, and that the discriminant equation will help forensic determination of sex in the Chinese population. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
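
    For readers unfamiliar with the statistical machinery, a minimal sketch of discriminant-function sex classification on the three retained dimensions is shown below. The feature means and spreads are synthetic placeholders, since the abstract reports only the accuracy, not the fitted coefficients.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    # Hypothetical measurements (mm) of EPWu, PHl, EPDm; synthetic placeholders,
    # not the study's data.
    males = rng.normal([50.0, 16.0, 34.0], 2.0, size=(113, 3))
    females = rng.normal([45.0, 14.0, 31.0], 2.0, size=(97, 3))
    X = np.vstack([males, females])
    y = np.array([1] * 113 + [0] * 97)   # 1 = male, 0 = female

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("apparent accuracy:", lda.score(X, y))
    ```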

  3. Changes in combat task performance under increasing loads in active duty marines.

    PubMed

    Jaworski, Rebecca L; Jensen, Andrew; Niederberger, Brenda; Congalton, Robert; Kelly, Karen R

    2015-03-01

    U.S. Marines perform mission tasks under heavy loads, which may compromise performance of combat tasks; however, data supporting this performance decrement are limited. The aim of this study was to determine the effects of load on performance of combat-related tasks. Subjects (N=18) ran a modified Maneuver Under Fire ([MANUF], 300 yards [yd] total: two 25-yd sprints, 25-yd crawl, 75-yd casualty drag, 150-yd ammunition can carry, and grenade toss) portion of the U.S. Marine Corps Combat Fitness Test under 4 trial conditions: neat (no load), 15%, 30%, and 45% of body weight, with a shooting task pre- and post-trial. There was a significant increase in total time to completion as a function of load (p<0.0001), with a relationship between load and time (r=0.592, p<0.0001). Pre- to post-MANUF shot accuracy (p=0.005) and precision (p<0.0001) were reduced. Short-duration aerobic performance is significantly impacted by increasing loads, and marksmanship is compromised as a function of fatigue and load. These data suggest that loads of 45% of body weight increase the time needed to cover distance and reduce the ability to precisely hit a target. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  4. Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    NASA Astrophysics Data System (ADS)

    Giancardo, L.; Sánchez-Ferro, A.; Butterworth, I.; Mendoza, C. S.; Hooker, J. M.

    2015-04-01

    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high-resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button activates well-defined brain areas which are known to be affected by motor-compromised conditions. In this study, we demonstrate that daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, and it is detected by our classifier with an Area Under the ROC Curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression to psychomotor impairment (p < 0.001) regardless of the content and language of the text typed, and perform consistently with different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders or intoxication at negligible cost in the general population.
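
    A hedged sketch of the kind of pipeline such a classifier might follow: summarize key-hold times per typing session into simple features, then score a binary classifier by AUC. The feature set, the gamma-distributed synthetic hold times, and the logistic regression model are all assumptions for illustration; the paper's actual features and classifier are not specified here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def hold_time_features(hold_times):
        """Summary features of key-hold times from one typing session (hypothetical)."""
        h = np.asarray(hold_times)
        return [h.mean(), h.std(), *np.percentile(h, [10, 50, 90])]

    # Synthetic sessions: impaired typing assumed slower and more variable
    normal = [hold_time_features(rng.gamma(9.0, 0.011, 300)) for _ in range(40)]
    impaired = [hold_time_features(rng.gamma(6.0, 0.020, 300)) for _ in range(40)]
    X = np.array(normal + impaired)
    y = np.array([0] * 40 + [1] * 40)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
    ```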

  5. 24 CFR 17.73 - Standards for compromise of claims.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Standards for compromise of claims... General Provisions § 17.73 Standards for compromise of claims. (a) Compromise offer. An offer to...) Documentary evidence of compromise. No compromise of a claim shall be final or binding on the Department...

  6. Development and validation of a web-based questionnaire for surveying the health and working conditions of high-performance marine craft populations.

    PubMed

    de Alwis, Manudul Pahansen; Lo Martire, Riccardo; Äng, Björn O; Garme, Karl

    2016-06-20

    High-performance marine craft crews are susceptible to various adverse health conditions caused by multiple interacting factors. However, limited epidemiological data are available for assessment of working conditions at sea. Although questionnaire surveys are widely used for identifying exposures, outcomes and associated risks with high levels of accuracy, until now no validated epidemiological tool has existed for surveying occupational health and performance in these populations. To develop and validate a web-based questionnaire for epidemiological assessment of occupational and individual risk exposure pertinent to musculoskeletal health conditions and performance in high-performance marine craft populations. A questionnaire for investigating the association between work-related exposure, performance and health was initially developed by a consensus panel under four subdomains, viz. demography, lifestyle, work exposure and health, and was systematically validated by expert raters for content relevance and simplicity in three consecutive stages, each iteratively followed by a consensus panel revision. The item content validity index (I-CVI) was determined as the proportion of experts giving a rating of 3 or 4. The scale content validity index (S-CVI/Ave) was computed by averaging the I-CVIs for the assessment of the questionnaire as a whole. Finally, the questionnaire was pilot tested. The S-CVI/Ave increased from 0.89 to 0.96 for relevance and from 0.76 to 0.94 for simplicity, resulting in 36 items in the final questionnaire. The pilot test confirmed the feasibility of the questionnaire. The present study shows that the web-based questionnaire fulfils previously published validity acceptance criteria and is therefore considered valid and feasible for the empirical surveying of epidemiological aspects among high-performance marine craft crews and similar populations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
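
    The two indices are defined explicitly in the abstract, so they translate directly into code; this small sketch assumes a ratings matrix of experts by items on the 1-4 scale.

    ```python
    import numpy as np

    def content_validity(ratings):
        """ratings: (n_experts, n_items) array of 1-4 relevance scores."""
        i_cvi = (ratings >= 3).mean(axis=0)   # proportion of experts rating 3 or 4
        s_cvi_ave = i_cvi.mean()              # average of the item-level indices
        return i_cvi, s_cvi_ave
    ```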

  7. [Sensitive Determination of Chondroitin Sulfate by Fluorescence Recovery of an Anionic Aluminum Phthalocyanine-Cationic Surfactant Ion-Association Complex Used as a Fluorescent Probe Emitting at Red Region].

    PubMed

    Chen, Lin; Huang, Ping; Yang, Hui-qing; Deng, Ya-bin; Guo, Meng-lin; Li, Dong-hui

    2015-08-01

    Determination of chondroitin sulfate in the biomedical field has important value, yet conventional methods for its assay remain unsatisfactory in sensitivity, selectivity or simplicity. This work aimed at developing a novel method for sensitive and selective determination of chondroitin sulfate by fluorimetry. We found that certain cationic surfactants can quench, with high efficiency, the fluorescence of tetrasulfonated aluminum phthalocyanine (AlS4Pc), a strongly fluorescent compound emitting in the red region, but that the fluorescence of this quenched system recovers significantly when chondroitin sulfate (CS) is present. Tetradecyl dimethyl benzyl ammonium chloride (TDBAC), screened from the candidate cationic surfactants, was chosen as the quencher because it shows the most efficient quenching effect. The fluorescence of AlS4Pc was strongly quenched by TDBAC owing to the formation of an association complex between AlS4Pc and TDBAC. Fluorescence of the association complex recovered dramatically after the addition of CS, because CS shifts the association equilibrium and releases AlS4Pc, thus increasing the fluorescence of the reaction system. Based on this phenomenon, a novel method combining simplicity, accuracy and sensitivity was developed for the quantitative determination of CS. Reaction time and the effect of coexisting substances were investigated and discussed. Under optimum conditions the linear range of the calibration curve was 0.20~10.0 μg · mL(-1), and the detection limit for CS was 0.070 μg · mL(-1). The method has been applied to the analysis of practical samples with satisfactory results. This work expands the applications of AlS4Pc in the biomedical area.

  8. Functional evaluation of out-of-the-box text-mining tools for data-mining tasks

    PubMed Central

    Jung, Kenneth; LePendu, Paea; Iyer, Srinivasan; Bauer-Mehren, Anna; Percha, Bethany; Shah, Nigam H

    2015-01-01

    Objective The trade-off between the speed and simplicity of dictionary-based term recognition and the richer linguistic information provided by more advanced natural language processing (NLP) is an area of active discussion in clinical informatics. In this paper, we quantify this trade-off among text processing systems that make different trade-offs between speed and linguistic understanding. We tested both types of systems in three clinical research tasks: phase IV safety profiling of a drug, learning adverse drug–drug interactions, and learning used-to-treat relationships between drugs and indications. Materials We first benchmarked the accuracy of the NCBO Annotator and REVEAL in a manually annotated, publicly available dataset from the 2008 i2b2 Obesity Challenge. We then applied the NCBO Annotator and REVEAL to 9 million clinical notes from the Stanford Translational Research Integrated Database Environment (STRIDE) and used the resulting data for three research tasks. Results There is no significant difference between using the NCBO Annotator and REVEAL in the results of the three research tasks when using large datasets. In one subtask, REVEAL achieved higher sensitivity with smaller datasets. Conclusions For a variety of tasks, employing simple term recognition methods instead of advanced NLP methods results in little or no impact on accuracy when using large datasets. Simpler dictionary-based methods have the advantage of scaling well to very large datasets. Promoting the use of simple, dictionary-based methods for population-level analyses can advance the adoption of NLP in practice. PMID:25336595
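
    As a toy illustration of what "simple, dictionary-based term recognition" means in practice (not the NCBO Annotator's actual API), a greedy longest-match annotator over a token stream might look like this:

    ```python
    def dictionary_annotate(text, term_to_concept):
        """Greedy longest-match dictionary term recognition over a token stream."""
        tokens = text.lower().split()
        max_len = max(len(t.split()) for t in term_to_concept)
        hits, i = [], 0
        while i < len(tokens):
            for n in range(min(max_len, len(tokens) - i), 0, -1):
                phrase = " ".join(tokens[i:i + n])
                if phrase in term_to_concept:
                    hits.append((phrase, term_to_concept[phrase]))
                    i += n
                    break
            else:
                i += 1   # no dictionary term starts at this token
        return hits

    # e.g. dictionary_annotate("patient denies type 2 diabetes",
    #                          {"type 2 diabetes": "C0011860"})
    ```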

  9. New stimulation pattern design to improve P300-based matrix speller performance at high flash rate

    NASA Astrophysics Data System (ADS)

    Polprasert, Chantri; Kukieattikool, Pratana; Demeechai, Tanee; Ritcey, James A.; Siwamogsatham, Siwaruk

    2013-06-01

    Objective. We propose a new stimulation pattern design for the P300-based matrix speller aimed at increasing the minimum target-to-target interval (TTI). Approach. Inspired by the simplicity and strong performance of the conventional row-column (RC) stimulation, the proposed stimulation is obtained by modifying the RC stimulation through alternating row and column flashes which are selected based on the proposed design rules. The second flash of the double-flash components is then delayed for a number of flashing instants to increase the minimum TTI. The trade-off inherited in this approach is the reduced randomness within the stimulation pattern. Main results. We test the proposed stimulation pattern and compare its performance in terms of selection accuracy, raw and practical bit rates with the conventional RC flashing paradigm over several flash rates. By increasing the minimum TTI within the stimulation sequence, the proposed stimulation has more event-related potentials that can be identified compared to that of the conventional RC stimulations, as the flash rate increases. This leads to significant performance improvement in terms of the letter selection accuracy, the raw and practical bit rates over the conventional RC stimulation. Significance. These studies demonstrate that significant performance improvement over the RC stimulation is obtained without additional testing or training samples to compensate for low P300 amplitude at high flash rate. We show that our proposed stimulation is more robust to reduced signal strength due to the increased flash rate than the RC stimulation.
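
    The central quantity in this design, the target-to-target interval, is straightforward to compute for any candidate flash sequence; the sketch below assumes the sequence is given as a list of flashed cell groups, with function names as illustrative assumptions.

    ```python
    def target_ttis(flash_sequence, target):
        """Intervals (in flashes) between successive flashes containing the target."""
        hits = [i for i, group in enumerate(flash_sequence) if target in group]
        return [b - a for a, b in zip(hits, hits[1:])]  # empty if < 2 hits

    # A design goal of the proposed pattern: min(target_ttis(seq, t)) stays above
    # a chosen floor for every cell t in the speller matrix.
    ```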

  10. In-situ hydrogen in metal determination using a minimum neutron source strength and exposure time.

    PubMed

    Hatem, M; Agamy, S; Khalil, M Y

    2013-08-01

    Water is frequently present in the environment and is a source of hydrogen that can interact with many materials. Because of its small atomic size, a hydrogen atom can easily diffuse into a host metal, and though the metal may appear unchanged for a time, it will eventually and abruptly lose its strength and ductility. Thus, measuring the hydrogen content in metals is important in many fields, such as the nuclear industry, automotive and aircraft fabrication, and particularly offshore oil and gas fields. It has been demonstrated that nuclear methods for measuring the hydrogen content in metals can achieve sensitivity levels on the order of parts per million. However, nuclear methods have not been used in the field, for two reasons: the first is exposure limitations; the second is the hi-tech instrumentation required for better accuracy. In this work, a new method using a low-strength portable neutron source is explored in conjunction with detectors based on plastic nuclear detection films. The in-situ requirements are: simplicity of setup, high reliability, minimal exposure dose, and acceptable accuracy at an acceptable cost. A computer model of the experimental setup is used to reproduce the results of a proof-of-concept experiment and to predict the sensitivity levels under optimised experimental conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Bedside end-tidal CO2 tension as a screening tool to exclude pulmonary embolism.

    PubMed

    Hemnes, A R; Newman, A L; Rosenbaum, B; Barrett, T W; Zhou, C; Rice, T W; Newman, J H

    2010-04-01

    End-tidal carbon dioxide tension (P(ET,CO2)) is a surrogate for dead space ventilation which may be useful in the evaluation of pulmonary embolism (PE). We aimed to define the optimal P(ET,CO2) level to exclude PE in patients evaluated for possible thromboembolism. 298 patients were enrolled over 6 months at a single academic centre. P(ET,CO2) was measured within 24 h of contrast-enhanced helical computed tomography, lower extremity duplex or ventilation/perfusion scan. Performance characteristics were measured by comparing test results with the clinical diagnosis of PE. PE was diagnosed in 39 (13%) patients. Mean P(ET,CO2) in healthy volunteers did not differ from P(ET,CO2) in patients without PE (36.3+/-2.8 versus 35.5+/-6.8 mmHg). P(ET,CO2) in patients with PE was 30.5+/-5.5 mmHg (p<0.001 versus patients without PE). A P(ET,CO2) of ≥36 mmHg had optimal sensitivity and specificity (87.2 and 53.0%, respectively) with a negative predictive value of 96.6% (95% CI 92.3-98.5). This increased to 97.6% (95% CI 93.2-99.) when combined with a Wells score <4. A P(ET,CO2) of ≥36 mmHg may reliably exclude PE, and accuracy is augmented by combination with the Wells score. P(ET,CO2) should be prospectively compared to D-dimer in accuracy and simplicity to exclude PE.

  12. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme support its generic applicability to the given model equations. A two-dimensional simulation of a MESFET device is also performed with the KFVS method, producing results in good agreement with those obtained by the NT central scheme.
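
    The second-order spatial accuracy comes from MUSCL-type reconstruction; a minimal sketch of that ingredient (with a minmod limiter, one common choice) is shown below. The KFVS interface flux and the semi-implicit relaxation step are omitted, and the function names are assumptions for illustration.

    ```python
    import numpy as np

    def minmod(a, b):
        """Slope limiter: picks the smaller slope, zero at extrema."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_interface_states(u):
        """Second-order left/right states at cell interfaces from cell averages u."""
        slope = np.zeros_like(u)
        slope[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
        u_left = u[:-1] + 0.5 * slope[:-1]    # left state at interface i+1/2
        u_right = u[1:] - 0.5 * slope[1:]     # right state at interface i+1/2
        return u_left, u_right                # feed these to the numerical flux
    ```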

  13. Sensitive spectrophotometric determination of aceclofenac following azo dye formation with 4-carboxyl-2,6-dinitrobenzene diazonium ion.

    PubMed

    Aderibigbe, Segun A; Adegoke, Olajire A; Idowu, Olakunle S; Olaleye, Sefiu O

    2012-01-01

    This study describes a sensitive spectrophotometric determination of aceclofenac following azo dye formation with 4-carboxyl-2,6-dinitrobenzenediazonium ion (CDNBD). Spot tests and thin-layer chromatography revealed the formation of a new compound distinct from CDNBD and aceclofenac. Optimization studies established a reaction time of 5 min at 30 degrees C after vortex-mixing the drug/CDNBD for 10 s. An absorption maximum of 430 nm was selected as the analytical wavelength. A linear response was observed over 1.2-4.8 μg/mL of aceclofenac with a correlation coefficient of 0.9983, and the drug combined with CDNBD at a stoichiometric ratio of 2 : 1. The method has a limit of detection of 0.403 μg/mL and a limit of quantitation of 1.22 μg/mL, and is reproducible over a three-day assessment. The method gave a Sandell's sensitivity of 3.279 ng/cm2. Intra- and inter-day accuracies (in terms of errors) were less than 6%, while precisions were of the order of 0.03-1.89% (RSD). The developed spectrophotometric method is equivalent in accuracy (p > 0.05) to the British Pharmacopoeia 2010 potentiometric method. It has the advantages of speed, simplicity, sensitivity and more affordable instrumentation, and could find application as a rapid and sensitive analytical method for aceclofenac. It is the first reported azo dye derivatization method for the analysis of aceclofenac in bulk samples and dosage forms.
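
    As a hedged illustration of how such calibration figures are typically derived (the absorbance values below are hypothetical, and the ICH-style 3.3σ/slope and 10σ/slope conventions are an assumption, not necessarily the authors' exact procedure):

    ```python
    import numpy as np

    conc = np.array([1.2, 2.4, 3.6, 4.8])            # standards within the reported range (ug/mL)
    absorb = np.array([0.118, 0.231, 0.352, 0.465])  # hypothetical absorbances at 430 nm

    slope, intercept = np.polyfit(conc, absorb, 1)   # linear calibration curve
    residual_sd = np.std(absorb - (slope * conc + intercept), ddof=2)
    lod = 3.3 * residual_sd / slope                  # ICH-style limit of detection
    loq = 10.0 * residual_sd / slope                 # ICH-style limit of quantitation
    ```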

  14. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    NASA Astrophysics Data System (ADS)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  15. Poster — Thur Eve — 40: Automated Quality Assurance for Remote-Afterloading High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Anthony; Ravi, Ananth

    2014-08-15

    High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of positional and dwell time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionality such as the ability to determine source dwell time and source transit time. In our system, a video is taken of the brachytherapy source as it is sent through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and an adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.

  16. Preserved strategic grain-size regulation in memory reporting in patients with schizophrenia.

    PubMed

    Akdogan, Elçin; Izaute, Marie; Bacon, Elisabeth

    2014-07-15

    Cognitive and introspection disturbances are considered core features of schizophrenia. In real life, people are usually free to choose which aspects of an event they recall, how much detail to volunteer, and what degree of confidence to impart. Their decision will depend on various situational and personal goals. The authors explored whether schizophrenia patients are able to achieve a compromise between accuracy and informativeness when reporting semantic information. Twenty-five patients and 23 healthy matched control subjects answered general knowledge questions requiring numerical answers (how high is the Eiffel tower?), freely at first and then through a metamemory-based control. In the second phase, they answered with respect to two predefined intervals, one narrow and one broad; attributed a confidence judgment to both answers; and afterward selected one of the two answers. Data were analyzed using analyses of variance with group as the between-subjects factor. Patients reported information at a self-paced level of precision less accurately than healthy participants. However, they benefited remarkably from the framing of the response and from the metamemory processes of monitoring and control to the point of improving their memory reporting and matching healthy subjects' accuracy. In spite of their memory deficit during free reporting, after accuracy monitoring, patients strategically regulated the grain size of their memory reporting and proved able to manage the competing goals of accuracy and informativeness. These results give some cause for optimism as to the possibility for patients to adapt to everyday life situations. © 2013 Society of Biological Psychiatry Published by Society of Biological Psychiatry All rights reserved.

  17. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    NASA Astrophysics Data System (ADS)

    Bonzom, Valentin

    2016-07-01

    We review an approach aimed at studying discrete (pseudo-)manifolds in dimension d ≥ 2, called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles, which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is no longer always true when attention is restricted to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are a sort of hypermap whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity, and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively combinatorial in nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.

  18. One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2016-10-01

    The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N); for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire set of atlas images followed by voxel-wise weighting (VWW), and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via the arithmetic averaging and VWW schemes using both direct registration and ORMA approaches, as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of whole-body bone extracted by the different attenuation maps and by quantitative analysis of the resulting PET images, with CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone demonstrated the superiority of the VWW atlas fusion algorithm, which achieved a Dice similarity measure of 0.82 ± 0.04, compared with arithmetic averaging atlas fusion (0.60 ± 0.02), both using conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. The results of the quantitative PET analysis followed the same trend, with less significant differences in terms of SUV bias, whereas massive improvements were observed compared with PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by the VWW and VWW-ORMA methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in the bone, respectively. The proposed algorithm is capable of generating decent attenuation maps; the quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computational time is reduced by a factor of 1/N at the expense of a modest decrease in quantitative accuracy, thus achieving a reasonable compromise between computing time and quantitative performance.
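
    The core saving of the ORMA approach is that atlas-to-target alignment is obtained by composing each precomputed atlas-to-reference transform with the single online reference-to-target registration. The sketch below illustrates this with 4x4 affine matrices; in practice such pipelines typically use deformable registrations, so the matrices stand in only for the composition idea, and the names are assumptions.

    ```python
    import numpy as np

    def atlas_to_target(T_ref_to_target, T_atlas_to_ref):
        """Compose 4x4 homogeneous transforms: atlas -> reference -> target.

        T_ref_to_target is the single online registration; each T_atlas_to_ref
        is precomputed offline, so N atlases need only one registration at runtime.
        """
        return T_ref_to_target @ T_atlas_to_ref

    # x_target = atlas_to_target(T_rt, T_ar) @ x_atlas  (homogeneous coordinates)
    ```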

  19. NOTE: Implementation of angular response function modeling in SPECT simulations with GATE

    NASA Astrophysics Data System (ADS)

    Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.

    2010-05-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
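
    A hedged sketch of what a tabulated response model amounts to: per-photon tracking through the collimator is replaced by a table lookup of the detection weight as a function of incidence angle. The table below is a placeholder shape, not real data; actual ARF tables also depend on photon energy and the specific collimator, and are generated from dedicated Monte Carlo runs.

    ```python
    import numpy as np

    # Hypothetical tabulated ARF: probability that a photon hitting the collimator
    # plane at incidence angle theta (degrees) contributes to the projection.
    theta_grid = np.linspace(0.0, 30.0, 61)
    arf_values = np.exp(-theta_grid / 3.0)   # placeholder shape, not real data

    def arf_weight(theta):
        """Replace per-photon collimator tracking with a table interpolation."""
        return np.interp(theta, theta_grid, arf_values)
    ```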

  20. Leveraging transcript quantification for fast computation of alternative splicing profiles.

    PubMed

    Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo

    2015-09-01

    Alternative splicing plays an essential role in many cellular processes and bears major relevance for the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool that exploits fast transcript quantification to calculate relative inclusion values of alternative splicing events. SUPPA's accuracy is comparable to, and sometimes better than, that of standard methods on simulated as well as real RNA-sequencing data, benchmarked against experimentally validated events. We assess the variability arising from the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as those obtained with quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
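
    The relative inclusion (PSI) of an event follows directly from transcript abundances, which is what makes the approach fast; a minimal sketch, with the dictionary layout as an assumption:

    ```python
    def event_psi(tpm, inclusion_transcripts, all_transcripts):
        """Percent-spliced-in of an event from transcript abundances (TPM)."""
        inc = sum(tpm.get(t, 0.0) for t in inclusion_transcripts)
        tot = sum(tpm.get(t, 0.0) for t in all_transcripts)
        return inc / tot if tot > 0 else float("nan")

    # event_psi({"tx1": 7.0, "tx2": 3.0}, ["tx1"], ["tx1", "tx2"])  ->  0.7
    ```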

  1. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data, and data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access: efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computational and memory scalability and interpretability of results. In this study we review some possible solutions.

  2. UNDERSTANDING HOW PLANETS BECOME MASSIVE. I. DESCRIPTION AND VALIDATION OF A NEW TOY MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ormel, C. W.; Kobayashi, H., E-mail: ormel@astro.berkeley.edu, E-mail: hkobayas@nagoya-u.ac.jp

    2012-03-10

    The formation of giant planets requires the accumulation of ~10 Earth masses in solids; but how do protoplanets acquire their mass? There are many, often competing, processes that regulate the accretion rate of protoplanets. To assess their effects we present a new, publicly available toy model. The rationale behind the toy model is that it encompasses as many physically relevant processes as possible, but at the same time does not compromise its simplicity, speed, and physical insight. The toy model follows a modular structure, where key features (e.g., planetesimal fragmentation, radial orbital decay, nebula turbulence) can be switched on or off. Our model assumes three discrete components (fragments, planetesimals, and embryos) and is zero-dimensional in space. We have tested the outcomes of the toy model against literature results and generally find satisfactory agreement. We include, for the first time, model features that capture the three-way interactions among small particles, gas, and protoplanets. Collisions among planetesimals will result in fragmentation, transferring a substantial amount of the solid mass to small particles, which couple strongly to the gas. Our results indicate that the efficiency of the accretion process then becomes very sensitive to the gas properties (especially to the turbulent state and the magnitude of the disk headwind, the decrease of the orbital velocity of the gas with respect to Keplerian) as well as to the characteristic fragment size.

  3. A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.

    PubMed

    Joy, Ajin; Paul, Joseph Suresh

    2018-03-07

    The aim is to avoid the formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of a pixel neighborhood toward uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a region of uniform intensity but possibly a planar region. Therefore, a controlled application of a fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as planes rather than groups of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher-degree TV using in vivo data sets at different undersampling levels, with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV, and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable for the experimentally determined range of the fourth-order regularization parameter, and therefore does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.
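
    For orientation, one explicit step of the second-order (Perona-Malik type) component is sketched below; the paper's contribution, the controlled blending of a fourth-order diffusivity to suppress staircasing, is not reproduced here, and the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def perona_malik_step(u, kappa=0.05, dt=0.2):
        """One explicit step of second-order (Perona-Malik type) nonlinear diffusion.

        Uses periodic boundaries (np.roll) for brevity.
        """
        g = lambda s: 1.0 / (1.0 + (np.abs(s) / kappa) ** 2)  # edge-stopping function
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        return u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    ```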

  4. A novel v- silicone vestibular stent: preventing vestibular stenosis and preserving nasal valves.

    PubMed

    Bassam, Wameedh Al; Bhargava, Deepa; Al-Abri, Rashid

    2012-01-01

    This report presents a novel style of placing nasal stents. Patients undergoing surgical procedures in the region of the nasal vestibule and nasal valves are at risk of developing vestibular stenosis and lifelong problems with the external and internal nasal valves as sequelae of the repair. The objective of the report is to demonstrate a simple and successful method of inverted V-stent placement to prevent the potential complications of vestibular stenosis and nasal valve compromise later in life. Following a fall on the sharp edge of a metallic bed, a sixteen-month-old child with a deep lacerated nasal wound extending from the columellar base toward the tip of the nose underwent surgical exploration and repair of the nasal vestibule and nasal cavity. A soft silicone stent fashioned as an inverted V was placed bilaterally. The child made a remarkable recovery with no evidence of vestibular stenosis or nasal valve abnormalities. In patients with nasal trauma involving the nasal vestibule and the internal and external nasal valves, stent placement avoids sequelae such as adhesions, contractures, synechiae, vestibular stenosis and fibrosis involving these anatomical structures. The advantages of the described V-stents over traditional ready-made rigid nasal stents, tubing and composite aural grafts are: a) technical simplicity of use, b) safety, c) less morbidity, d) greater comfort, and e) economy. To our knowledge, this is the first report of such a stent for the prevention of vestibular stenosis and preservation of the nasal valves.

  5. A Novel V- Silicone Vestibular Stent: Preventing Vestibular Stenosis and Preserving Nasal Valves

    PubMed Central

    Bassam, Wameedh AL; Bhargava, Deepa; Al-Abri, Rashid

    2012-01-01

    This report presents a novel style of placing nasal stents. Patients undergoing surgical procedures in the region of the nasal vestibule and nasal valves are at risk of developing vestibular stenosis and lifelong problems with the external and internal nasal valves as sequelae of the repair. The objective of the report is to demonstrate a simple and successful method of inverted V-stent placement to prevent the potential complications of vestibular stenosis and nasal valve compromise later in life. Following a fall on the sharp edge of a metallic bed, a sixteen-month-old child with a deep lacerated nasal wound extending from the columellar base toward the tip of the nose underwent surgical exploration and repair of the nasal vestibule and nasal cavity. A soft silicone stent fashioned as an inverted V was placed bilaterally. The child made a remarkable recovery with no evidence of vestibular stenosis or nasal valve abnormalities. In patients with nasal trauma involving the nasal vestibule and the internal and external nasal valves, stent placement avoids sequelae such as adhesions, contractures, synechiae, vestibular stenosis and fibrosis involving these anatomical structures. The advantages of the described V-stents over traditional ready-made rigid nasal stents, tubing and composite aural grafts are: a) technical simplicity of use, b) safety, c) less morbidity, d) greater comfort, and e) economy. To our knowledge, this is the first report of such a stent for the prevention of vestibular stenosis and preservation of the nasal valves. PMID:22359729

  6. Accuracy of implant surgery with surgical guide by inexperienced clinicians: an in vitro study

    PubMed Central

    Tanaka, Hideaki; Sasaki, Masanori; Ichimaru, Eiji; Naito, Yasushi; Matsushita, Yasuyuki; Koyano, Kiyoshi; Nakamura, Seiji

    2015-01-01

    Abstract Implant surgery with a surgical guide has been introduced with the aim of improving implant positioning. Such surgery might be considered easy even for inexperienced clinicians because of the simplicity of its steps; however, residual risks remain and can result in postoperative complications. The aim of this study was to assess the accuracy of implant surgery with a surgical guide performed by inexperienced clinicians in vitro. After preoperative computed tomographies (CTs) of five artificial models of unilateral free-end edentulism with scan templates, five surgical guides were fabricated from the templates. Following virtual planning, 10 implants were placed in the 45 and 47 regions by five residents without any placement experience. All drillings and placements were performed using the surgical guides. After postoperative CTs, deviations between the virtual and actual implant positions were quantified by overlaying pre- and postoperative CT data. The angle displacement of the implant axis in the 47 region was significantly larger than that in the 45 region (P = 0.031). The 3D offset of the implant base in the 47 region was significantly larger than that in the 45 region (P = 0.002). For the distal/apical directions, displacements of the base in the 47 region were significantly larger than those in the 45 region (P = 0.004 and P = 0.003, respectively). The 3D offset of the implant tip in the 47 region was significantly larger than that in the 45 region (P = 0.003). For the distal/apical directions, displacements of the tip in the 47 region were significantly larger than those in the 45 region (P = 0.002 and P = 0.003, respectively). Within the limitations of this in vitro study, these data on the accuracy of implant surgery with a surgical guide should be informative for further studies, because in vitro studies should be performed first to avoid unnecessary burden on patients, in advance of retrospective or prospective studies. A comparison of the accuracy in this in vitro model between inexperienced and well-experienced operators would be necessary for clinicians intending to use a surgical guide for placement. PMID:29744135

  7. The utility of low-density genotyping for imputation in the Thoroughbred horse

    PubMed Central

    2014-01-01

    Background Despite the dramatic reduction in the cost of high-density genotyping that has occurred over the last decade, it remains one of the limiting factors for obtaining the large datasets required for genomic studies of disease in the horse. In this study, we investigated the potential for low-density genotyping and subsequent imputation to address this problem. Results Using the haplotype phasing and imputation program, BEAGLE, it is possible to impute genotypes from low- to high-density (50K) in the Thoroughbred horse with reasonable to high accuracy. Analysis of the sources of variation in imputation accuracy revealed dependence both on the minor allele frequency of the single nucleotide polymorphisms (SNPs) being imputed and on the underlying linkage disequilibrium structure. Whereas equidistant spacing of the SNPs on the low-density panel worked well, optimising SNP selection to increase their minor allele frequency was advantageous, even when the panel was subsequently used in a population of different geographical origin. Replacing base pair position with linkage disequilibrium map distance reduced the variation in imputation accuracy across SNPs. Whereas a 1K SNP panel was generally sufficient to ensure that more than 80% of genotypes were correctly imputed, other studies suggest that a 2K to 3K panel is more efficient to minimize the subsequent loss of accuracy in genomic prediction analyses. The relationship between accuracy and genotyping costs for the different low-density panels suggests that a 2K SNP panel would represent good value for money. Conclusions Low-density genotyping with a 2K SNP panel followed by imputation provides a compromise between cost and accuracy that could promote more widespread genotyping, and hence the use of genomic information in horses. In addition to offering a low cost alternative to high-density genotyping, imputation provides a means to combine datasets from different genotyping platforms, which is becoming necessary since researchers are starting to use the recently developed equine 70K SNP chip. However, more work is needed to evaluate the impact of between-breed differences on imputation accuracy. PMID:24495673
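    The imputation-accuracy analysis above can be illustrated with a minimal sketch, not from the paper, that scores imputed genotypes as per-SNP concordance stratified by minor allele frequency; the array shapes and bin edges are illustrative assumptions.

```python
import numpy as np

def imputation_concordance(true_geno, imputed_geno,
                           maf_bins=(0.0, 0.05, 0.1, 0.25, 0.5)):
    """Fraction of correctly imputed genotypes, stratified by minor allele
    frequency.

    true_geno, imputed_geno: (n_animals, n_snps) arrays of 0/1/2 allele counts.
    """
    true_geno = np.asarray(true_geno)
    imputed_geno = np.asarray(imputed_geno)

    # Minor allele frequency per SNP, from the true genotypes.
    p = true_geno.mean(axis=0) / 2.0
    maf = np.minimum(p, 1.0 - p)

    per_snp_acc = (true_geno == imputed_geno).mean(axis=0)
    results = {}
    for lo, hi in zip(maf_bins[:-1], maf_bins[1:]):
        mask = (maf >= lo) & (maf < hi)
        if mask.any():
            results[f"MAF [{lo:.2f}, {hi:.2f})"] = per_snp_acc[mask].mean()
    return results
```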

  8. Design simplicity influences patient portal use: the role of aesthetic evaluations for technology acceptance

    PubMed Central

    Watkins, Ivan; Mackert, Michael S; Xie, Bo; Stephens, Keri K; Shalev, Heidi

    2016-01-01

    Objective This study focused on patient portal use and investigated whether aesthetic evaluations of patient portals function as antecedent variables to the variables in the Technology Acceptance Model. Methods A cross-sectional survey of current patient portal users (N = 333) was conducted online. Participants completed the Visual Aesthetics of Website Inventory, along with items measuring perceived ease of use (PEU), perceived usefulness (PU), and behavioral intentions (BIs) to use the patient portal. Results The hypothesized model accounted for 29% of the variance in BIs to use the portal, 46% of the variance in the PU of the portal, and 29% of the variance in the portal's PEU. Additionally, one dimension of aesthetic evaluation functioned as a predictor in the model: simplicity evaluations had a significant positive effect on PEU. Conclusion This study provides evidence that aesthetic evaluations, specifically regarding simplicity, function as a significant antecedent variable to patients' use of patient portals and should inform patient portal design strategies. PMID:26635314

  9. The practical use of simplicity in developing ground water models

    USGS Publications Warehouse

    Hill, M.C.

    2006-01-01

    The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.

  10. Definition of (so MIScalled) "Complexity" as UTTER-SIMPLICITY!!! Versus Deviations From it as Complicatedness-Measure

    NASA Astrophysics Data System (ADS)

    Young, F.; Siegel, Edward Carl-Ludwig

    2011-03-01

    (so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1/ω (1.000...) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via the simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/dω) ln(ω) = 1/ω, i.e. (d/dω)[SCALE-Invariance Symmetry-RESTORING](ω) = 1/ω. Via the Noether-theorem relation of continuous symmetries to conservation laws: (d/dω)[inter-scale 4-current 4-divergence = 0](ω) = 1/ω. Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed. (1994)] experimental psychology, i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS: either PLUS (additive) or TIMES (multiplicative) COMPLICATIONS of various system-specifics. The COMPLICATEDNESS-MEASURE is the DEVIATION FROM complexity's UTTER-SIMPLICITY: [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING], via the DIFFERENCE of power-spectrum power-law decays: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!

  11. Optical 3D methods for measurement of prosthetic wear of total hip arthroplasty: principles, verification and results.

    PubMed

    Rossler, Tomas; Mandat, Dusan; Gallo, Jiri; Hrabovsky, Miroslav; Pochmon, Michal; Havranek, Vitezslav

    2009-07-20

    Total hip arthroplasty (THA) significantly improves the quality of life in the majority of patients with severe osteoarthritis. However, long-term outcomes of THAs are compromised by aseptic loosening and periprosthetic osteolysis, which necessitate revision surgery. Both of these are causally linked to prosthetic wear liberated from the prosthetic articulating surfaces. As a result, there is a need to measure the mode and magnitude of wear. The paper evaluates three optical methods proposed for the construction of a device for non-contact prosthetic wear measurement. Of them, scanning profilometry achieved a promising combination of accuracy and repeatability. It is also sufficiently time-efficient to enable the development of a sensor for wear measurement.

  12. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments of complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that the gain in computational speed does not come at the expense of accuracy.

  13. Measuring sensitivity to viewpoint change with and without stereoscopic cues.

    PubMed

    Bell, Jason; Dickinson, Edwin; Badcock, David R; Kingdom, Frederick A A

    2013-12-04

    The speed and accuracy of object recognition are compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation in depth, including an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation in depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation in depth: the Wheatstone Eight Mirror Stereoscope. In doing so, we reveal a means to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.

  14. ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo

    2015-01-01

    By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302

  15. 38 CFR 1.931 - Bases for compromise.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Bases for compromise. 1.931 Section 1.931 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Standards for Compromise of Claims § 1.931 Bases for compromise. (a) VA may compromise a debt if...

  16. 38 CFR 1.931 - Bases for compromise.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Bases for compromise. 1.931 Section 1.931 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Standards for Compromise of Claims § 1.931 Bases for compromise. (a) VA may compromise a debt if...

  17. 38 CFR 1.931 - Bases for compromise.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Bases for compromise. 1.931 Section 1.931 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Standards for Compromise of Claims § 1.931 Bases for compromise. (a) VA may compromise a debt if...

  18. 38 CFR 1.931 - Bases for compromise.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Bases for compromise. 1.931 Section 1.931 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Standards for Compromise of Claims § 1.931 Bases for compromise. (a) VA may compromise a debt if...

  19. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements come from representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using an RVM-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than in previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method clearly outperforms the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool for future proteomics research.
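    A minimal sketch of the pipeline shape described above: feature reduction with PCA followed by a kernel classifier under 5-fold cross-validation. scikit-learn ships no Relevance Vector Machine, so a support vector classifier stands in for the RVM here, and the random feature matrix is a placeholder for LPQ-on-PSSM descriptors.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 256))   # placeholder for LPQ features of protein pairs
y = rng.integers(0, 2, size=500)  # 1 = interacting pair, 0 = non-interacting

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),  # reduce noise in the feature descriptors
    SVC(kernel="rbf"),     # stand-in for the RVM classifier
)

# 5-fold cross-validated accuracy, mirroring the paper's evaluation protocol.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```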

  20. Accuracy of rapid radiographic film calibration for intensity‐modulated radiation therapy verification

    PubMed Central

    Kulasekere, Ravi; Moran, Jean M.; Fraass, Benedick A.; Roberson, Peter L.

    2006-01-01

    A single calibration film method was evaluated for use with intensity‐modulated radiation therapy film quality assurance measurements. The single‐film method has the potential advantages of exposure simplicity, less media consumption, and improved processor quality control. Potential disadvantages include cross contamination of film exposure, implementation effort to document delivered dose, and added complication of film response analysis. Film response differences were measured between standard and single‐film calibration methods. Additional measurements were performed to help trace causes for the observed discrepancies. Kodak X‐OmatV (XV) film was found to have greater response variability than extended dose range (EDR) film. We found it advisable for XV film to relate the film response calibration for the single‐film method to a user‐defined optimal calibration geometry. Using a single calibration film exposed at the time of experiment, the total uncertainty of film response was estimated to be <2% (1%) for XV (EDR) film at 50 (100) cGy and higher, respectively. PACS numbers: 87.53.‐j, 87.53.Dq PMID:17533325

  1. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
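    A minimal sketch, not the authors' code, of the classical first-order method the paper improves upon, applied to the Streeter-Phelps deficit equation: the model is linearized at the parameter means and input variances are combined assuming independence. Parameter values and standard deviations are illustrative. The advanced method would instead re-linearize at the output level whose exceedance probability is sought.

```python
import numpy as np

def streeter_phelps_deficit(t, kd, ka, L0, D0):
    """Dissolved-oxygen deficit D(t) (mg/L) downstream of a waste load."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

# Illustrative means and standard deviations of the uncertain inputs.
means = {"kd": 0.35, "ka": 0.70, "L0": 20.0, "D0": 1.0}
stds = {"kd": 0.05, "ka": 0.10, "L0": 2.0, "D0": 0.2}
t = 2.0  # days

# First-order method: linearize D about the means via central differences,
# then combine the input variances assuming independent inputs.
D_mean = streeter_phelps_deficit(t, **means)
var = 0.0
for name, sd in stds.items():
    hi, lo = dict(means), dict(means)
    h = 1e-4 * means[name]
    hi[name] += h
    lo[name] -= h
    grad = (streeter_phelps_deficit(t, **hi)
            - streeter_phelps_deficit(t, **lo)) / (2 * h)
    var += (grad * sd) ** 2

print(f"D({t} d) ~ {D_mean:.2f} +/- {np.sqrt(var):.2f} mg/L (first order)")
```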

  2. Intermediate-sized natural gas fueled carbonate fuel cell power plants

    NASA Astrophysics Data System (ADS)

    Sudhoff, Frederick A.; Fleming, Donald K.

    1994-04-01

    This executive summary of the report describes the accomplishments of the joint US Department of Energy (DOE) Morgantown Energy Technology Center (METC) and M-C POWER Corporation Cooperative Research and Development Agreement (CRADA) No. 93-013. This study addresses the intermediate power plant size between 2 megawatts (MW) and 200 MW. A 25 MW natural-gas-fueled carbonate fuel cell power plant was chosen for this purpose. In keeping with recent designs, the fuel cell will operate under approximately three atmospheres of pressure. An expander/alternator is utilized to expand the exhaust gas to atmospheric conditions and generate additional power. A steam-bottoming cycle is not included in this study because it is not believed to be cost effective at this system size. This study also compares the simplicity and accuracy of a spreadsheet-based simulation with those of a full Advanced System for Process Engineering (ASPEN) simulation. The simple spreadsheet model runs entirely on a personal computer; it can be made available to all users and is particularly advantageous to the small business user.

  3. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust, since the frame number is the only free parameter, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
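    A minimal sketch of the core idea as described, on a single detector row (my reading, not the authors' code): the gain ratio of adjacent pixels is estimated as the temporal median of their response ratios, and per-pixel gains then follow recursively along the row; the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, width = 200, 64

# Synthetic single-row FPA: true gains, a changing scene, and readout noise.
true_gain = 1.0 + 0.1 * rng.normal(size=width)
scene = rng.uniform(50, 200, size=(n_frames, width))
frames = true_gain * scene + rng.normal(0, 1.0, size=(n_frames, width))

# Ratio of adjacent pixels' responses, estimated by a temporal median
# (the median suppresses outliers when the pixels view different radiance).
ratios = np.median(frames[:, 1:] / frames[:, :-1], axis=0)

# Recursive gain computation along the row, anchored at pixel 0.
gain = np.ones(width)
for i in range(1, width):
    gain[i] = gain[i - 1] * ratios[i - 1]
gain /= gain.mean()  # normalize: gains are defined up to a common scale

corrected = frames / gain
print("residual gain error:",
      np.abs(gain - true_gain / true_gain.mean()).max())
```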

  4. Orbital Advection with Magnetohydrodynamics and Vector Potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyra, Wladimir; McNally, Colin P.; Heinemann, Tobias

    Orbital advection is a significant bottleneck in disk simulations, and a particularly tricky one when used in connection with magnetohydrodynamics. We have developed an orbital advection algorithm suitable for the induction equation with magnetic potential. The electromotive force is split into advection and shear terms, and we find that we do not need an advective gauge since solving the orbital advection implicitly precludes the shear term from canceling the advection term. We prove and demonstrate the third-order-in-time accuracy of the scheme. The algorithm is also suited to non-magnetic problems. Benchmarked results of (hydrodynamical) planet-disk interaction and of the magnetorotational instability are reproduced. We include detailed descriptions of the construction and selection of stabilizing dissipations (or high-frequency filters) needed to generate practical results. The scheme is self-consistent, accurate, and elegant in its simplicity, making it particularly efficient for straightforward finite-difference methods. As a result of the work, the algorithm is incorporated in the public version of the Pencil Code, where it can be used by the community.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, J.C.; Shin, W.K.; Choi, C.Y.

    Transient heat transfer problems with phase changes (Stefan problems) occur in many engineering situations, including potential core melting and solidification during pressurized-water-reactor severe accidents, ablation of thermal shields, melting and solidification of alloys, and many others. This article addresses the numerical analysis of nonlinear transient heat transfer with melting or solidification. An effective and simple procedure is presented for the simulation of the motion of the boundary and the transient temperature field during the phase change process. To accomplish this purpose, an iterative implicit solution algorithm has been developed by employing the dual-reciprocity boundary-element method. The dual-reciprocity boundary-element approach provided in this article is much simpler than the usual boundary-element method, simultaneously applying a reciprocity principle and an available technique for dealing with the domain integral of the boundary-element formulation. In this article, attention is focused on two-dimensional melting (ablation)/solidification problems for simplicity. The accuracy and effectiveness of the present analysis method have been illustrated through comparisons of the calculated results for some examples of one-phase ablation/solidification problems with their known semianalytical or numerical solutions where available.

  6. Short-term stability improvements of an optical frequency standard based on free Ca atoms

    NASA Astrophysics Data System (ADS)

    Sherman, Jeff; Oates, Chris

    2010-03-01

    Compared to optical frequency standards featuring trapped ions or atoms in optical lattices, the strength of a standard using freely expanding neutral calcium atoms is not ultimate accuracy but rather short-term stability and experimental simplicity. Recently, a fractional frequency instability of 4 × 10^-15 at 1 second was demonstrated for the Ca standard at 657 nm [1]. The short cycle time (~2 ms) combined with only a moderate interrogation duty cycle (~15%) is thought to introduce excess, and potentially critically limiting, technical noise due to the Dick effect: high-frequency noise on the laser oscillator is not averaged away but is instead down-sampled by aliasing. We will present results of two strategies employed to minimize this effect: the reduction of clock laser noise by filtering the master clock oscillator through a high-finesse optical cavity [2], and an optimization of the interrogation cycle to match our laser's noise spectrum. [1] Oates et al., Optics Letters, 25(21), 1603-5 (2000) [2] Nazarova et al., J. Opt. Soc. Am. B, 5(10), 1632-8 (2008)

  7. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: Derivative spectrophotometry versus wavelet transform

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Lotfy, Hayam M.; Hassan, Nagiba Y.; El-Zeiny, Mohamed B.; Saleh, Sarah S.

    2015-01-01

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  8. [Simultaneous determination of delta13C values of glycerol and ethanol in wine by liquid chromatography coupled with isotope ratio mass spectrometry].

    PubMed

    Li, Xuemin; Jia, Guangqun; Cao, Yanzhong; Zhang, Jinjie; Wang, Lei; Sun, Huiyuan

    2013-12-01

    A novel procedure was established for the characterization of delta13C values of glycerol and ethanol in wine by liquid chromatography-isotope ratio mass spectrometry (LC-IRMS). Several parameters influencing the separation of glycerol and ethanol from the wine matrix were optimized. The precision and accuracy of the proposed method were 0.15‰ to 0.26‰ and 0.11‰ to 0.28‰, respectively. The results obtained for 40 wine samples showed that the delta13C value of glycerol ranged from -26.87‰ to -32.96‰ and that of ethanol ranged from -24.06‰ to -28.29‰. A close correlation (R = 0.82) was obtained between the delta13C values of glycerol and ethanol. The proposed method does not need complex sample treatment, and the delta13C values of glycerol and ethanol in wine can be determined simultaneously, improving on traditional methods in terms of simplicity and speed.

  9. Analytical description of concentration dependence of surface tension in multicomponent systems

    NASA Astrophysics Data System (ADS)

    Dadashev, R.; Kutuev, R.; Elimkhanov, D.

    2008-02-01

    Starting from fundamental thermodynamic expressions, an equation for the surface-tension isotherms of a ternary system is derived. Various assumptions concerning the concentration dependence of the molar areas are usually made when such equations are derived: the molar areas are calculated as an additive function of the composition of the bulk phase or of the composition of the surface layer. To define the concentration dependence of the molar areas, we used the stricter thermodynamic expression offered by Butler. In the derived equation, the dependence of the molar areas on the composition of the solution is taken into account; therefore, the equation can be applied to the calculation of surface tension over a wide concentration range of the components. Unlike the known expressions, the equation includes the surface-tension properties of the bounding binary systems, which makes the accuracy of the calculated values considerably higher. Among the advantages of the proposed equation are its mathematical simplicity and the fact that it involves only physical parameters whose experimental determination presents no special difficulties.
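    A minimal sketch of the Butler-equation route for the simplest case of a binary ideal solution: the surface mole fraction is solved so that both components imply the same surface tension. All property values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

R, T = 8.314, 1000.0  # J/(mol K), K -- illustrative temperature

# Illustrative pure-component surface tensions (N/m) and molar areas (m^2/mol).
sigma = {"1": 0.90, "2": 0.50}
area = {"1": 4.0e4, "2": 5.0e4}

def butler_mismatch(x1_surf, x1_bulk):
    """Difference between the surface tensions implied by components 1 and 2.

    Butler (ideal solution): sigma = sigma_i + (RT/A_i) ln(x_i^surf / x_i^bulk).
    At equilibrium both components give the same sigma, so we seek the root.
    """
    s1 = sigma["1"] + (R * T / area["1"]) * np.log(x1_surf / x1_bulk)
    s2 = sigma["2"] + (R * T / area["2"]) * np.log((1 - x1_surf) / (1 - x1_bulk))
    return s1 - s2

def surface_tension(x1_bulk):
    x1_surf = brentq(butler_mismatch, 1e-9, 1 - 1e-9, args=(x1_bulk,))
    return sigma["1"] + (R * T / area["1"]) * np.log(x1_surf / x1_bulk)

for x in (0.1, 0.5, 0.9):
    print(f"x1_bulk = {x:.1f} -> sigma = {surface_tension(x):.3f} N/m")
```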

  10. Quantitative elemental imaging of heterogeneous catalysts using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Trichard, F.; Sorbier, L.; Moncayo, S.; Blouët, Y.; Lienemann, C.-P.; Motto-Ros, V.

    2017-07-01

    Currently, the use of catalysis is widespread in almost all industrial processes; its use improves productivity, synthesis yields and waste treatment, as well as decreasing energy costs. Increasingly stringent requirements, in terms of reaction selectivity and environmental standards, impose progressively greater accuracy and control of operations. Meanwhile, the development of characterization techniques has been challenging, and the techniques often require equipment of high complexity. In this paper, we demonstrate a novel elemental approach for performing quantitative space-resolved analysis with ppm-scale quantification limits and μm-scale resolution. This approach, based on laser-induced breakdown spectroscopy (LIBS), is distinguished by its simplicity, all-optical design, and speed of operation. This work analyzes palladium-based porous alumina catalysts, which are commonly used in the selective hydrogenation process, using the LIBS method. We report an exhaustive study of the quantification capability of LIBS and its ability to perform imaging measurements over a large dynamic range, typically from a few ppm to wt%. These results offer new insight into the use of LIBS-based imaging in industry and pave the way for innumerable applications.

  11. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
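    For contrast with the geographic scheme, here is a minimal sketch of the standard pairwise randomized gossip baseline on a ring: at each step a random node averages with a random neighbor, so the values converge to the global mean while the sum is conserved. The slow convergence on the ring is exactly the inefficiency the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
values = rng.normal(size=n)  # initial sensor measurements
true_mean = values.mean()

# Ring topology: node i's neighbors are i-1 and i+1 (mod n). Pairwise
# averaging preserves the sum, so the fixed point is the global mean.
for step in range(100_000):
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n  # random neighbor on the ring
    avg = 0.5 * (values[i] + values[j])
    values[i] = values[j] = avg

print("max deviation from mean:", np.abs(values - true_mean).max())
```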

  12. All-fiber wavelength-tunable picosecond nonlinear reflectivity measurement setup for characterization of semiconductor saturable absorber mirrors

    NASA Astrophysics Data System (ADS)

    Viskontas, K.; Rusteika, N.

    2016-09-01

    Semiconductor saturable absorber mirrors (SESAMs) are the key component for many passively mode-locked ultrafast laser sources. A particular set of nonlinear parameters is required to achieve self-starting mode-locking, or to avoid undesirable Q-switched mode-locking, in an ultra-short pulse laser. In this paper, we introduce a novel all-fiber wavelength-tunable picosecond-pulse setup for the measurement of the nonlinear properties of saturable absorber mirrors at around 1 μm center wavelength. The main advantage of an all-fiber configuration is the simplicity of measuring fiber-integrated or fiber-pigtailed saturable absorbers. A tunable picosecond fiber laser makes it possible to investigate the nonlinear parameters at different wavelengths in the ultrafast regime. To verify the capability of the setup, nonlinear parameters for different SESAMs with low and high modulation depth were measured. In the operating wavelength range 1020-1074 nm, <1% absolute nonlinear reflectivity accuracy was demonstrated. The achieved fluence range was from 100 nJ/cm2 to 2 mJ/cm2, with corresponding intensity from 10 kW/cm2 to 300 MW/cm2.

  13. Investigation of methods for estimating hand bone dimensions using X-ray hand anthropometric data.

    PubMed

    Kong, Yong-Ku; Freivalds, Andris; Kim, Dae-Min; Chang, Joonho

    2017-06-01

    This study examined two conversion methods, M1 and M2, to predict finger/phalange bone lengths from finger/phalange surface lengths. Forty-one Korean college students (25 males and 16 females) were recruited, and their finger/phalange surface lengths, bone lengths and grip strengths were measured using a vernier caliper, an X-ray generator and a double-handle force measurement system, respectively. M1 and M2 were defined as formulas estimating finger/phalange bone lengths from one dimension (surface hand length) and from four finger dimensions (surface finger lengths), respectively. The estimation errors of M1 averaged 1.22 mm, smaller than those of M2 (1.29 mm). The bone lengths estimated by M1 (mean r = 0.81) showed higher correlations with the measured bone lengths than those estimated by M2 (0.79). Thus, the M1 method was recommended in the present study, based on conversion simplicity and accuracy.
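    A minimal sketch of the kind of regression-based conversion the study compares; the linear form is assumed for illustration, and the calibration data are synthetic, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data: surface hand length (mm) vs. X-ray bone length (mm).
surface_hand_len = rng.uniform(160, 200, size=40)
bone_len = 0.45 * surface_hand_len + 5.0 + rng.normal(0, 1.2, size=40)

# M1-style conversion: predict a bone length from a single hand dimension
# by ordinary least squares.
b, a = np.polyfit(surface_hand_len, bone_len, 1)
predicted = a + b * surface_hand_len

mean_abs_err = np.abs(predicted - bone_len).mean()
r = np.corrcoef(predicted, bone_len)[0, 1]
print(f"bone ~ {a:.2f} + {b:.3f} * surface; MAE = {mean_abs_err:.2f} mm, r = {r:.2f}")
```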

  14. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    PubMed

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.

  15. TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction

    NASA Astrophysics Data System (ADS)

    Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe

    2017-05-01

    This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaptation by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both AUC and accuracy.

  16. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.

  17. Numerical prediction on the dispersion of pollutant particles

    NASA Astrophysics Data System (ADS)

    Osman, Kahar; Ali, Zairi; Ubaidullah, S.; Zahid, M. N.

    2012-06-01

    The increasing concern over air pollution has led people around the world to seek more efficient ways to control the problem. Air dispersion modeling has proven to be one of the alternatives, providing an economical way to manage the growing threat of air pollution. The objective of this research is to develop a practical numerical algorithm to predict the dispersion of pollutant particles around a specific emission source. The source selected was a rubber wood manufacturing plant. The Gaussian plume model was used as the air dispersion model due to its simplicity and generic applicability. Results of this study show that ground-level concentrations of the pollutant particles reached approximately 90 μg/m3, in agreement with other software. This value surpasses the limit of 50 μg/m3 stipulated by the National Ambient Air Quality Standard (NAAQS) and the Recommended Malaysian Guidelines (RMG) set by the Department of Environment of Malaysia. The results also show higher pollutant particle concentration readings during dry seasons than during rainy seasons. In general, the developed algorithm is proven able to predict the particle distribution around an emission source with acceptable accuracy.
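    A minimal sketch of a ground-level Gaussian-plume estimate with reflection at the ground. The power-law dispersion coefficients and source parameters are illustrative stand-ins, not the study's values.

```python
import numpy as np

def plume_concentration(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Gaussian-plume concentration (g/m^3) with ground reflection.

    x: downwind, y: crosswind, z: height (m); Q: emission rate (g/s);
    u: wind speed (m/s); H: effective stack height (m). sigma_y and sigma_z
    follow simple power laws in x (illustrative stability-class stand-ins).
    """
    sigma_y = a * x**0.9
    sigma_z = b * x**0.85
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image-source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration 500 m downwind, in micrograms/m^3.
c = plume_concentration(x=500.0, y=0.0, z=0.0, Q=2.0, u=3.0, H=30.0)
print(f"{c * 1e6:.1f} ug/m^3")
```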

  18. Entering the Pantheon of 21st Century Molecular Biology Tools: A Perspective on Digital PCR.

    PubMed

    Karlin-Neumann, George; Bizouarn, Francisco

    2018-01-01

    After several decades of relatively modest use, in the last several years digital PCR (dPCR) has grown to become the new gold standard for nucleic acid quantification. This coincides with the commercial availability of scalable, affordable, and reproducible droplet-based dPCR platforms in the past five years and has led to its rapid dissemination into diverse research fields and testing applications. Among these, it has been adopted most vigorously into clinical oncology where it is beginning to be used for plasma genotyping in cancer patients undergoing treatment. Additionally, innovation across the scientific community has extended the benefits of reaction partitioning beyond DNA and RNA quantification alone, and demonstrated its usefulness in evaluating DNA size and integrity, the physical linkage of colocalized markers, levels of enzyme activity and specific cation concentrations in a sample, and more. As dPCR technology gains in popularity and breadth, its power and simplicity can often be taken for granted; thus, the reader is reminded that due diligence must be exercised in order to make claims not only of precision but also of accuracy in their measurements.

  19. The transmission of sound in nonuniform ducts. [carrying steady, compressible flow

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1975-01-01

    The method of weighted residuals in the form of a modified Galerkin method with boundary residuals was developed for the study of the transmission of sound in nonuniform ducts carrying a steady, compressible flow. In this development, the steady flow was modeled as essentially one dimensional but with a kinematic modification to force tangency of the flow at the duct walls. Three forms of the computational scheme were developed using for basis functions (1) the no-flow uniform duct modes, (2) positive running uniform duct modes, with flow, and (3) positive and negative running uniform duct modes, with flow. The formulation using the no-flow modes was the most highly developed, and has advantages primarily due to relative computational simplicity. Results using the three methods are shown to converge to known solutions for several special cases, and the most significant check case is against low frequency, one dimensional results over the complete subsonic Mach number range. Development of the method is continuing, with emphasis on assessing the relative accuracy and efficiency of the three implementations.

  20. Contact angle measurement with a smartphone

    NASA Astrophysics Data System (ADS)

    Chen, H.; Muros-Cobos, Jesus L.; Amirfazli, A.

    2018-03-01

    In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument provides measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurement is a significant advancement in the field, as it breaks the dominant mold of using a computer and a bench-bound setup for such systems since their appearance in the 1980s.
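    A minimal sketch of the polynomial-fitting route to a contact angle (my illustration, not the app's code): profile points near the contact point are fitted with a polynomial whose slope at the contact line gives the angle; the sample points are synthetic, generated from a circular cap with a known 60-degree angle.

```python
import numpy as np

# Synthetic drop-edge points near the left contact point of a circular cap
# (unit radius, substrate at y = 0) with a true contact angle of 60 degrees.
theta_true = np.radians(60)
cx, cy = 0.0, -np.cos(theta_true)  # circle center for this contact angle
phi = np.linspace(np.pi / 2 + theta_true - 0.3, np.pi / 2 + theta_true, 25)
x, y = cx + np.cos(phi), cy + np.sin(phi)

# Fit a quadratic y(x) to the profile near the contact point, then take the
# slope where the profile meets the substrate (y = 0).
coeffs = np.polyfit(x, y, 2)
x0 = x[np.argmin(np.abs(y))]  # approximate contact-point abscissa
slope = np.polyval(np.polyder(coeffs), x0)
theta_est = np.degrees(np.arctan(slope))
print(f"estimated contact angle: {theta_est:.1f} deg (true: 60.0)")
```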

  1. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  2. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  3. Contact angle measurement with a smartphone.

    PubMed

    Chen, H; Muros-Cobos, Jesus L; Amirfazli, A

    2018-03-01

    In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument provides measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurement is a significant advancement in the field, as it breaks the dominant mold of using a computer and a bench-bound setup for such systems since their appearance in the 1980s.

  4. Contact angle adjustment in equation-of-state-based pseudopotential model

    NASA Astrophysics Data System (ADS)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  5. Circular motion geometry using minimal data.

    PubMed

    Jiang, Guang; Quan, Long; Tsui, Hung-Tat

    2004-06-01

    Circular motion or single axis motion is widely used in computer vision and graphics for 3D model acquisition. This paper describes a new and simple method for recovering the geometry of uncalibrated circular motion from a minimal set of only two points in four images. This problem has been previously solved using nonminimal data, either by computing the fundamental matrix and trifocal tensor in three images or by fitting conics to tracked points in five or more images. It is first established that two sets of tracked points in different images under circular motion for two distinct space points are related by a homography. Then, we compute a plane homography from a minimal two points in four images. After that, we show that the unique pair of complex conjugate eigenvectors of this homography are the images of the circular points of the planes parallel to the circular-motion plane. Subsequently, all other motion and structure parameters are computed from this homography in a straightforward manner. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new method.
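    A minimal sketch of the final algebraic step as described: given the plane homography relating the two tracked point sets, its complex-conjugate eigenvector pair is extracted as the image of the circular points. The homography here is synthetic, built from a rotation conjugated by a generic projective transform.

```python
import numpy as np

# Synthetic ground truth: a rotation homography conjugated by a generic
# projective transform T (standing in for the unknown camera geometry).
theta = np.radians(30)
S = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([[800.0, 5.0, 320.0],
              [0.0, 780.0, 240.0],
              [1e-4, 2e-4, 1.0]])
H = T @ S @ np.linalg.inv(T)  # homography relating the two tracked point sets

eigvals, eigvecs = np.linalg.eig(H)

# The unique complex-conjugate eigenvector pair is the image of the
# circular points of the planes parallel to the motion plane.
complex_idx = np.where(np.abs(eigvals.imag) > 1e-9)[0]
for k in complex_idx:
    p = eigvecs[:, k]
    print(p / p[2])  # homogeneous image coordinates of a circular point
```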

  6. Frequency accurate coherent electro-optic dual-comb spectroscopy in real-time.

    PubMed

    Martín-Mateos, Pedro; Jerez, Borja; Largo-Izquierdo, Pedro; Acedo, Pablo

    2018-04-16

    Electro-optic dual-comb spectrometers have proved to be a promising technology for sensitive, high-resolution and rapid spectral measurements. Electro-optic combs possess very attractive features like simplicity, reliability, bright optical teeth, and typically moderate but quickly tunable optical spans. Furthermore, in a dual-comb arrangement, narrowband electro-optic combs are generated with a level of mutual coherence that is sufficiently high to enable optical multiheterodyning without inter-comb stabilization or signal processing systems. However, this valuable tool still presents several limitations; for instance, on most systems, absolute frequency accuracy and long-term stability cannot be guaranteed; likewise, interferometer-induced phase noise restricts coherence time and limits the attainable signal-to-noise ratio. In this paper, we address these drawbacks and demonstrate a cost-efficient absolute electro-optic dual-comb instrument based on a frequency stabilization mechanism and a novel adaptive interferogram acquisition approach devised for electro-optic dual-combs capable of operating in real-time. The spectrometer, completely built from commercial components, provides sub-ppm frequency uncertainties and enables a signal-to-noise ratio of 10000 (intensity noise) in 30 seconds of integration time.

  7. Online anomaly detection in wireless body area networks for reliable healthcare monitoring.

    PubMed

    Salem, Osman; Liu, Yaning; Mehaoua, Ahmed; Boutaba, Raouf

    2014-09-01

    In this paper, we propose a lightweight approach for online detection of faulty measurements by analyzing the data collected from medical wireless body area networks. The proposed framework performs sequential data analysis using a smartphone as a base station, and takes into account the constrained resources of the smartphone, such as processing power and storage capacity. The main objective is to raise alarms only when patients enter an emergency situation, and to discard false alarms triggered by faulty measurements or ill-behaved sensors. The proposed approach is based on the Haar wavelet decomposition, nonseasonal Holt-Winters forecasting, and the Hampel filter for spatial and temporal analysis. Our objective is to reduce false alarms resulting from unreliable measurements and to reduce unnecessary healthcare interventions. We apply the proposed approach to a real physiological dataset. Our experimental results prove the effectiveness of our approach in achieving good detection accuracy with a low false alarm rate. The simplicity and processing speed of the proposed framework make it useful and efficient for real-time diagnosis.
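    A minimal sketch of a generic Hampel filter for flagging faulty measurements (a textbook implementation, not the authors' pipeline); the window size and threshold are conventional defaults.

```python
import numpy as np

def hampel_flags(x, window=5, n_sigmas=3.0):
    """Flag outliers: points more than n_sigmas robust standard deviations
    from the rolling median, with the deviation estimated as 1.4826 * MAD."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = np.median(np.abs(x[lo:hi] - med))
        if 1.4826 * mad > 0 and abs(x[i] - med) > n_sigmas * 1.4826 * mad:
            flags[i] = True
    return flags

# Example: a heart-rate trace with two spurious sensor readings.
hr = np.array([72, 73, 71, 74, 72, 180, 73, 72, 71, 40, 72, 73], dtype=float)
print(np.where(hampel_flags(hr))[0])  # indices of suspected faulty samples
```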

  8. Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis

    PubMed Central

    Hagen, Nils T.

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
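    A minimal sketch of harmonic counting as described: the i-th of N coauthors receives (1/i)/(1 + 1/2 + ... + 1/N) of one credit, so each paper distributes exactly one credit across its byline.

```python
def harmonic_credit(n_authors):
    """Credit share for each authorship rank under harmonic counting."""
    norm = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / norm for i in range(1, n_authors + 1)]

# A four-author paper: the first author gets 0.48, the last gets 0.12.
print([round(c, 2) for c in harmonic_credit(4)])
print(sum(harmonic_credit(4)))  # credits sum to 1.0 per publication
```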

  9. Fake currency detection using image processing

    NASA Astrophysics Data System (ADS)

    Agasti, Tushar; Burand, Gajanan; Wade, Pratik; Chitra, P.

    2017-11-01

    The advancement of color printing technology has increased the rate of fake currency note printing and duplication of notes on a very large scale. A few years back, such printing could only be done in a print house, but now anyone can print a currency note with high accuracy using a simple laser printer. As a result, the circulation of fake notes in place of genuine ones has increased greatly. India has unfortunately been cursed with problems like corruption and black money, and the counterfeiting of currency notes compounds them. This motivates the design of a system that detects fake currency notes quickly and efficiently. The proposed system gives an approach to verify Indian currency notes. Verification of a currency note is done using the concepts of image processing. This article describes the extraction of various features of Indian currency notes. MATLAB software is used to extract the features of the note. The proposed system has advantages such as simplicity and high processing speed. The result predicts whether the currency note is genuine or fake.

  10. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: derivative spectrophotometry versus wavelet transform.

    PubMed

    Salem, Hesham; Lotfy, Hayam M; Hassan, Nagiba Y; El-Zeiny, Mohamed B; Saleh, Sarah S

    2015-01-25

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
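    A minimal sketch of the basic ELM recipe the review surveys: a random, untrained hidden layer followed by a single least-squares solve for the output weights; the regression data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: learn y = sin(3x) from noisy samples.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

# Extreme learning machine: random input weights/biases are fixed, and only
# the output weights are trained, via a single least-squares solve.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```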

  12. Low-Cost, High-Performance Alternatives for Target Temperature Monitoring Using the Near-Infrared Spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virgo, Mathew; Quigley, Kevin J.; Chemerisov, Sergey

    A process is being developed for commercial production of the medical isotope Mo-99 through a photo-nuclear reaction on a Mo-100 target using a high-power electron accelerator. This process requires temperature monitoring of the window through which a high-current electron beam is transmitted to the target. For this purpose, we evaluated two near-infrared technologies: the OMEGA Engineering iR2 pyrometer and the Ocean Optics Maya2000 spectrometer with infrared-enhanced charge-coupled device (CCD) sensor. Measuring in the near-infrared spectrum, in contrast to the long-wavelength infrared spectrum, offers a few immediate advantages: (1) ordinary glass or quartz optical elements can be used; (2) alignment can be performed without heating the target; and (3) emissivity corrections to temperature are typically less than 10%. If spatial resolution is not required, the infrared pyrometer is attractive because of its accuracy, low cost, and simplicity. If spatial resolution is required, we make recommendations for near-infrared imaging based on our data augmented by calculations.
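
    The small emissivity correction follows from Wien's approximation, under which the true temperature T relates to the measured brightness temperature T_b by 1/T = 1/T_b + (lambda/c2) ln(emissivity). A sketch with illustrative values:

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def true_temperature(t_brightness, wavelength_m, emissivity):
        """Wien-approximation emissivity correction for a spectral pyrometer."""
        return 1.0 / (1.0 / t_brightness + (wavelength_m / C2) * np.log(emissivity))

    # At 1 um, even a fairly low emissivity of 0.7 shifts a 1500 K brightness
    # reading by only ~4%:
    print(true_temperature(1500.0, 1.0e-6, 0.7))  # ~1558 K
    ```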

  13. Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches

    NASA Astrophysics Data System (ADS)

    Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano

    2014-12-01

    The paper presents two approaches to the mathematical modeling of an alkaline electrolyzer cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes), some of which can be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrode specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results, with an average error below 3% against the considered experimental data, and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred, thanks to its simplicity, for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
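
    As a rough illustration of the reduced-order idea (the functional form and numbers below are assumptions, not the authors' exact equations), a two-parameter equivalent-circuit model can be fitted to a polarization curve:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    U_REV = 1.23          # reversible cell voltage, V (assumed)
    R_ELECTROLYTE = 0.2   # electrolyte resistance, ohm cm^2 (assumed calculated)

    def cell_voltage(i, r_electrodes, parasitic):
        """Two-parameter equivalent-circuit polarization model (assumed form)."""
        return U_REV + i * (r_electrodes + R_ELECTROLYTE) + parasitic

    # hypothetical measured polarization data (A/cm^2, V)
    i_data = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
    u_data = np.array([1.48, 1.55, 1.69, 1.83, 1.97])
    popt, _ = curve_fit(cell_voltage, i_data, u_data)
    print(popt)  # fitted electrode resistance and parasitic-loss offset
    ```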

  14. A mixed-mode crack analysis of rectilinear anisotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Wang, S. S.; Yau, J. F.; Corten, H. T.

    1980-01-01

    A very simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems in rectilinear anisotropic solids is presented. The analysis is formulated on the basis of conservation laws of anisotropic elasticity and of fundamental relationships in anisotropic fracture mechanics. The problem is reduced to a system of linear algebraic equations in the mixed-mode stress intensity factors. One of the salient features of the present approach is that it can determine the mixed-mode stress intensity solutions directly from the conservation integrals evaluated along a path removed from the crack-tip region, without the need to solve the corresponding complex near-field boundary value problem. Several examples with solutions available in the literature are solved to verify the accuracy of the current analysis. This method is further demonstrated to be superior to other approaches in its numerical simplicity and computational efficiency. Solutions of more complicated and practical engineering problems dealing with a crack emanating from a circular hole in composites are also presented to illustrate the capability of this method.

  15. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout the region of interest (ROI) superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
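
    The SAD criterion that the particle filter is benchmarked against is a plain block match; a minimal numpy sketch of the exhaustive (traversal) search:

    ```python
    import numpy as np

    def sad_match(ref_block, frame, top, left, search=8):
        """Displacement of ref_block in frame minimizing the sum of
        absolute differences over a +/- search window (exhaustive)."""
        h, w = ref_block.shape
        best, best_dy, best_dx = np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                    continue
                sad = np.abs(frame[y:y + h, x:x + w].astype(float) - ref_block).sum()
                if sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx
    ```

    The double loop is precisely the traversal correlation whose cost the particle filter cuts, by evaluating only a sampled set of candidate displacements.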

  16. Optimization and Validation of a Sensitive Method for HPLC-PDA Simultaneous Determination of Torasemide and Spironolactone in Human Plasma using Central Composite Design.

    PubMed

    Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai

    2015-01-01

    A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using design of experiments. A central composite design was used to optimize the method, with the content of acetonitrile, the concentration of buffer and the pH of the mobile phase as independent variables, while the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone were chosen as dependent variables. The chromatographic separation was achieved on a Phenomenex C18 column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile (82.5:17.5 v/v) pumped at a flow rate of 1.0 mL/min. The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limits of quantitation were 80 and 50 ng/mL for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method support its validity for routine clinical studies.
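
    A rotatable central composite design for three factors, as used here, consists of factorial, axial and center points; a sketch of generating the coded design matrix:

    ```python
    import itertools
    import numpy as np

    def central_composite(k=3, n_center=6):
        """Coded design matrix for a rotatable k-factor CCD (sketch)."""
        alpha = (2 ** k) ** 0.25                          # rotatability criterion
        factorial = np.array(list(itertools.product([-1, 1], repeat=k)))
        axial = np.vstack([sign * alpha * np.eye(k)[i]
                           for i in range(k) for sign in (+1, -1)])
        center = np.zeros((n_center, k))
        return np.vstack([factorial, axial, center])

    print(central_composite().shape)  # (20, 3): 8 factorial + 6 axial + 6 center runs
    ```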

  17. Simple and rapid quantification of brominated vegetable oil in commercial soft drinks by LC–MS

    PubMed Central

    Chitranshi, Priyanka; da Costa, Gonçalo Gamboa

    2016-01-01

    We report here a simple and rapid method for the quantification of brominated vegetable oil (BVO) in soft drinks based upon liquid chromatography–electrospray ionization mass spectrometry. Unlike previously reported methods, this novel method does not require hydrolysis, extraction or derivatization steps, but rather a simple “dilute and shoot” sample preparation. The quantification is conducted by mass spectrometry in selected ion recording mode and a single point standard addition procedure. The method was validated in the range of 5–25 μg/mL BVO, encompassing the legal limit of 15 μg/mL established by the US FDA for fruit-flavored beverages in the US market. The method was characterized by excellent intra- and inter-assay accuracy (97.3–103.4%) and very low imprecision [0.5–3.6% (RSD)]. The direct nature of the quantification, simplicity, and excellent statistical performance of this methodology constitute clear advantages in relation to previously published methods for the analysis of BVO in soft drinks. PMID:27451219
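
    Single-point standard addition infers the unknown concentration from the sample signal before and after a known spike; assuming negligible dilution, a minimal sketch:

    ```python
    def standard_addition(signal_sample, signal_spiked, conc_added):
        """Single-point standard addition, dilution assumed negligible:
        the signal is proportional to concentration, so
        C = C_added * S0 / (S_spiked - S0)."""
        return conc_added * signal_sample / (signal_spiked - signal_sample)

    # e.g. a drink reading 1200 counts, and 2100 counts after a 10 ug/mL spike:
    print(standard_addition(1200, 2100, 10.0))  # ~13.3 ug/mL BVO
    ```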

  18. Radar-based dynamic testing of the cable-suspended bridge crossing the Ebro River at Amposta, Spain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Carmelo; Luzi, Guido

    2014-05-27

    Microwave remote sensing is the most recent experimental methodology suited to the non-contact measurement of deflections on large structures, in static or dynamic conditions. After a brief description of the radar measurement system, the paper addresses the application of microwave remote sensing to ambient vibration testing of a cable-suspended bridge. The investigated bridge crosses the Ebro River at Amposta, Spain and consists of two steel stiffening trusses and a series of equally spaced steel floor beams; the main span is supported by inclined stay cables and two series of 8 suspension cables. The dynamic tests were performed in operational conditions, with the sensor placed in two different positions so that the response of both the steel deck and the arrays of suspension elements was measured. The experimental investigation confirms the simplicity of use of the radar and the accuracy of the results provided by microwave remote sensing, as well as the issues often met in the clear localization of measurement points.

  19. [Development of quality assurance/quality control web system in radiotherapy].

    PubMed

    Okamoto, Hiroyuki; Mochizuki, Toshihiko; Yokoyama, Kazutoshi; Wakita, Akihisa; Nakamura, Satoshi; Ueki, Heihachi; Shiozawa, Keiko; Sasaki, Koji; Fuse, Masashi; Abe, Yoshihisa; Itami, Jun

    2013-12-01

    Our purpose is to develop a QA/QC (quality assurance/quality control) web system using HTML (HyperText Markup Language) and the server-side scripting language PHP (Hypertext Preprocessor), which can be useful as a tool to share information about QA/QC in radiotherapy. The system proposed in this study can be easily built in one's own institute, because HTML can be easily handled. There are two desired functions in a QA/QC web system: (i) to review the results of QA/QC for a radiotherapy machine, together with the manuals and reports necessary for routinely performing radiotherapy; by disclosing the results, transparency can be maintained; (ii) to present, for simplicity's sake, the institute's QA/QC protocol using pictures and movies, which can also serve as an educational tool for junior radiation technologists and medical physicists. By using this system, not only administrators but also all staff involved in radiotherapy can obtain information about the condition and accuracy of treatment machines through the QA/QC web system.

  20. 10 CFR 1015.302 - Bases for compromise.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Bases for compromise. 1015.302 Section 1015.302 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COLLECTION OF CLAIMS OWED THE UNITED STATES Standards for the Compromise of Claims § 1015.302 Bases for compromise. (a) DOE may compromise a debt if the Government cannot...

  1. 10 CFR 1015.302 - Bases for compromise.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Bases for compromise. 1015.302 Section 1015.302 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COLLECTION OF CLAIMS OWED THE UNITED STATES Standards for the Compromise of Claims § 1015.302 Bases for compromise. (a) DOE may compromise a debt if the Government cannot...

  2. 10 CFR 1015.302 - Bases for compromise.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Bases for compromise. 1015.302 Section 1015.302 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COLLECTION OF CLAIMS OWED THE UNITED STATES Standards for the Compromise of Claims § 1015.302 Bases for compromise. (a) DOE may compromise a debt if the Government cannot...

  3. 10 CFR 1015.302 - Bases for compromise.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Bases for compromise. 1015.302 Section 1015.302 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COLLECTION OF CLAIMS OWED THE UNITED STATES Standards for the Compromise of Claims § 1015.302 Bases for compromise. (a) DOE may compromise a debt if the Government cannot...

  4. A Practical and Automated Approach to Large Area Forest Disturbance Mapping with Remote Sensing

    PubMed Central

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions. PMID:24717283
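
    Step (ii) reduces to a threshold rule on the SWIR difference image; an illustrative numpy sketch (simplified to one global window rather than the paper's local windows):

    ```python
    import numpy as np

    def label_training_pixels(swir_diff, k=2.0):
        """Seed training labels from the SWIR difference image (illustrative;
        the paper thresholds within local windows, not globally)."""
        mu, sigma = swir_diff.mean(), swir_diff.std()
        disturbed = swir_diff > mu + k * sigma   # canopy loss brightens SWIR
        stable = swir_diff < mu + 0.5 * sigma    # conservative undisturbed pool
        return disturbed, stable
    ```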

  5. Simplified 13C-urea breath test with a new infrared spectrometer for diagnosis of Helicobacter pylori infection.

    PubMed

    Chen, Tseng-Shing; Chang, Full-Young; Chen, Pang-Chi; Huang, Thomas W; Ou, Jonathan T; Tsai, Ming-Hung; Wu, Ming-Shiang; Lin, Jaw-Town

    2003-11-01

    Infrared spectrometry has correlated excellently with mass spectrometry in detecting the ratio of 13CO2 to 12CO2 in breath samples. The present study aimed to evaluate the accuracy of the 13C-urea breath test (13C-UBT) using a new model of infrared analyzer. A total of 600 patients who were undergoing upper endoscopy without receiving eradication therapy were entered into the study. Culture, histology, and rapid urease test on biopsies from the antrum and corpus of the stomach were used for the determination of Helicobacter pylori infection. Breath samples were collected before and 20 min after drinking 100 mg 13C-urea in 100 mL water. The optimal cutoff value was determined by the receiver operating characteristic curve. Of the 586 patients who were eligible for analysis, 369 were positive for H. pylori infection, 185 were negative, and 32 were indeterminate. When the appropriate cutoff value was set at 3.5 per thousand, a sensitivity of 97.8%, a specificity of 96.8% and an accuracy of 97.5% were obtained using the 13C-UBT. The accuracy of the 13C-UBT decreased when the CO2 concentration in the breath sample was <2%, as compared with ≥2% (93.6% vs 97.7%), mainly because of a decrease in specificity (81.8% vs 97.7%). In 2.7% of patients, Δ13CO2 values fell between 3.0 and 4.5 per thousand, and in these patients the risk of error was 47%. The 13C-UBT performed with infrared spectrometry is a highly sensitive, specific, and non-invasive method for the detection of H. pylori infection. The immediate availability of the test result and technical simplicity make it particularly effective in routine clinical practice.

  6. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    PubMed

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.

  7. Green Bank Telescope active surface system

    NASA Astrophysics Data System (ADS)

    Lacasse, Richard J.

    1998-05-01

    During the design phase of the Green Bank Telescope (GBT), various means of providing an accurate surface on a large-aperture paraboloid were considered. Automated jacks supporting the primary reflector were selected as the appropriate technology, since they promised greater performance and potentially lower costs than a homologous or carbon-fiber design and had certain advantages over an active secondary. The design of the active surface has presented many challenges. Since the actuators are mounted on a tipping structure, they were required to support a significant side-load. Such devices were not readily available commercially, so they had to be developed. Additional actuator requirements include low backlash, repeatable positioning, and an operational life of at least 230 years. Similarly, no control system capable of controlling the 2209 actuators was commercially available; again, a prime requirement was reliability. Maintainability was also a very important consideration. The system architecture is tree-like. An active surface 'master computer' controls interaction with the telescope control system and controls ancillary equipment such as power supplies and temperature monitors. Two slave computers interface with the master computer, and each closes approximately 1100 position loops. For simplicity, the servo is an 'on/off' type, yet achieves a positioning resolution of 25 microns. Each slave computer interfaces with 4 VME I/O cards, which in turn communicate with 140 control modules. The control modules read out the positions of the actuators every 0.1 s and control the actuators' DC motors. Initial control of the active surface will be based on an elevation-dependent structural model. Later, the model will be improved by holographic observations. Surface accuracy will be improved further by using a laser ranging system that will actively measure the surface figure. Several tests have been conducted to assure that the system will perform as desired when installed on the telescope. These include actuator life tests, motor life tests, position transducer accuracy tests, and positioning accuracy tests.

  8. Galaxy And Mass Assembly: automatic morphological classification of galaxies using statistical learning

    NASA Astrophysics Data System (ADS)

    Sreejith, Sreevarsha; Pereverzyev, Sergiy, Jr.; Kelvin, Lee S.; Marleau, Francine R.; Haltmeier, Markus; Ebner, Judith; Bland-Hawthorn, Joss; Driver, Simon P.; Graham, Alister W.; Holwerda, Benne W.; Hopkins, Andrew M.; Liske, Jochen; Loveday, Jon; Moffett, Amanda J.; Pimbblet, Kevin A.; Taylor, Edward N.; Wang, Lingyu; Wright, Angus H.

    2018-03-01

    We apply four statistical learning methods to a sample of 7941 galaxies (z < 0.06) from the Galaxy And Mass Assembly survey to test the feasibility of using automated algorithms to classify galaxies. Using 10 features measured for each galaxy (sizes, colours, shape parameters, and stellar mass), we apply the techniques of Support Vector Machines, Classification Trees, Classification Trees with Random Forest (CTRF) and Neural Networks, returning True Prediction Ratios (TPRs) of 75.8 per cent, 69.0 per cent, 76.2 per cent, and 76.0 per cent, respectively. Those occasions whereby all four algorithms agree with each other yet disagree with the visual classification ('unanimous disagreement') serve as a potential indicator of human error in classification, occurring in ~9 per cent of ellipticals, ~9 per cent of little blue spheroids, ~14 per cent of early-type spirals, ~21 per cent of intermediate-type spirals, and ~4 per cent of late-type spirals and irregulars. We observe that the choice of parameters, rather than that of algorithms, is more crucial in determining classification accuracy. Due to its simplicity in formulation and implementation, we recommend the CTRF algorithm for classifying future galaxy data sets. Adopting the CTRF algorithm, the TPRs of the five galaxy types are: E, 70.1 per cent; LBS, 75.6 per cent; S0-Sa, 63.6 per cent; Sab-Scd, 56.4 per cent; and Sd-Irr, 88.9 per cent. Further, we train a binary classifier using this CTRF algorithm that divides galaxies into spheroid-dominated (E, LBS, and S0-Sa) and disc-dominated (Sab-Scd and Sd-Irr), achieving an overall accuracy of 89.8 per cent. This translates into an accuracy of 84.9 per cent for spheroid-dominated systems and 92.5 per cent for disc-dominated systems.
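
    A sketch of the recommended tree-ensemble approach on the 10 measured features, using scikit-learn's random forest as a stand-in for the paper's CTRF implementation:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def classify_galaxies(X, y):
        """X: (n_galaxies, 10) features (sizes, colours, shape, stellar mass);
        y: visual classes (E, LBS, S0-Sa, Sab-Scd, Sd-Irr)."""
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
        return clf.fit(X, y), scores.mean()
    ```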

  9. Quantification of transformation products of rocket fuel unsymmetrical dimethylhydrazine in soils using SPME and GC-MS.

    PubMed

    Bakaikina, Nadezhda V; Kenessov, Bulat; Ul'yanovskii, Nikolay V; Kosyakov, Dmitry S

    2018-07-01

    Determination of transformation products (TPs) of rocket fuel unsymmetrical dimethylhydrazine (UDMH) in soil is highly important for environmental impact assessment of the launches of heavy space rockets from Kazakhstan, Russia, China and India. The method based on headspace solid-phase microextraction (HS SPME) and gas chromatography-mass spectrometry is advantageous over other known methods due to greater simplicity and cost efficiency. However, accurate quantification of these analytes using HS SPME is limited by the matrix effect. In this research, we proposed using internal standard and standard addition calibrations to achieve a proper combination of quantification accuracy for the key TPs of UDMH and cost efficiency. 1-Trideuteromethyl-1H-1,2,4-triazole (MTA-d3) was used as the internal standard. Internal standard calibration allowed controlling matrix effects during quantification of 1-methyl-1H-1,2,4-triazole (MTA), N,N-dimethylformamide (DMF), and N-nitrosodimethylamine (NDMA) in soils with humus content < 1%. Using SPME at 60 °C for 15 min with a 65 µm Carboxen/polydimethylsiloxane fiber, recoveries of MTA, DMF and NDMA for sandy and loamy soil samples were 91-117, 85-123 and 64-132%, respectively. For improving the method accuracy and widening the range of analytes, standard addition and its combination with internal standard calibration were tested and compared on real soil samples. The combined calibration approach provided the greatest accuracies for NDMA, DMF, N-methylformamide, formamide, 1H-pyrazole and 3-methyl-1H-pyrazole. For determination of 1-formyl-2,2-dimethylhydrazine, 3,5-dimethylpyrazole, 2-ethyl-1H-imidazole, 1H-imidazole, 1H-1,2,4-triazole, pyrazines and pyridines, standard addition calibration is more suitable. However, the proposed approach and collected data allow using both approaches simultaneously. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

    The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool for profile measurement of rough engineering surfaces. Compared with other competing techniques, it is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle, with some new approaches, three important problems that severely limit the capability and the accuracy of the PSPFP technique. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques; the objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
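
    The core PSPFP computation referenced throughout is the phase-shifting formula: with four fringe images shifted by 90 degrees each, the wrapped phase follows from a four-quadrant arctangent. A minimal sketch:

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images with 90-degree shifts:
        phi = atan2(I4 - I2, I1 - I3); surface height follows after
        unwrapping and system calibration."""
        return np.arctan2(i4 - i2, i1 - i3)
    ```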

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Daniel C.; Bowman, Judd; Parsons, Aaron R.

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
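
    The quoted Pictor A model is an ordinary power law anchored at 150 MHz; evaluating it takes one line:

    ```python
    def pictor_a_flux(freq_mhz, s150=382.0, alpha=-0.76):
        """Flux density in Jy from the single power law S = S150 * (nu/150)**alpha."""
        return s150 * (freq_mhz / 150.0) ** alpha

    print(pictor_a_flux(100.0))  # ~520 Jy at the bottom of the PAPER band
    ```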

  12. When to Pull the Trigger for the Counterattack: Simplicity versus Sophistication.

    DTIC Science & Technology

    1985-12-02

    [OCR-garbled report documentation page; recoverable details: School of Advanced Military Studies, U.S. Army Command and General Staff College, Fort Leavenworth, Kansas, 2 December 1985. Approved for public release: distribution unlimited.]

  13. Anaesthesia in Sickle-cell States: A Plea for Simplicity

    PubMed Central

    Oduro, K. A.; Searle, J. F.

    1972-01-01

    505 patients with various haemoglobinopathies were given a general anaesthetic between January 1970 and February 1972. One patient with haemoglobin SC disease and one patient with sickle-cell trait (HbAS) died postoperatively. Four other patients who were sickling positive, but whose genotypes were unknown, died, one from sickle-cell crisis precipitated by haemorrhage. A simple anaesthetic technique together with good postoperative care can provide safe general anaesthesia for patients with sickle-cell states. A plea is made for simplicity in the anaesthetic management of these patients. PMID:4643397

  14. Volume simplicity constraint in the Engle-Livine-Pereira-Rovelli spin foam model

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Belov, Vadim

    2018-04-01

    We propose a quantum version of the quadratic volume simplicity constraint for the Engle-Livine-Pereira-Rovelli spin foam model. It relies on a formula for the volume of 4-dimensional polyhedra, depending on its bivectors and the knotting class of its boundary graph. While this leads to no further condition for the 4-simplex, the constraint becomes nontrivial for more complicated boundary graphs. We show that, in the semiclassical limit of the hypercuboidal graph, the constraint turns into the geometricity condition observed recently by several authors.

  15. Pettit on consequentialism and universalizability.

    PubMed

    Gleeson, Andrew

    2005-01-01

    Philip Pettit has argued that universalizability entails consequentialism. I criticise the argument for relying on a question-begging reading of the impartiality of universalization. A revised form of the argument can be constructed by relying on preference-satisfaction rationality, rather than on impartiality. But this revised argument succumbs to an ambiguity in the notion of a preference (or desire). I compare the revised argument to an earlier argument of Pettit's for consequentialism that appealed to the theoretical virtue of simplicity, and I raise questions about the force of appeal to notions like simplicity and rationality in moral argument.

  16. Fall Risk Assessment Tools for Elderly Living in the Community: Can We Do Better?

    PubMed

    Palumbo, Pierpaolo; Palmerini, Luca; Bandinelli, Stefania; Chiari, Lorenzo

    2015-01-01

    Falls are a common, serious threat to the health and self-confidence of the elderly. Assessment of fall risk is an important aspect of effective fall prevention programs. In order to test whether it is possible to outperform current prognostic tools for falls, we analyzed 1010 variables pertaining to mobility collected from 976 elderly subjects (InCHIANTI study). We trained and validated a data-driven model that issues probabilistic predictions about future falls. We benchmarked the model against other fall risk indicators: history of falls, gait speed, Short Physical Performance Battery (Guralnik et al. 1994), and the literature-based fall risk assessment tool FRAT-up (Cattelani et al. 2015). Parsimony in the number of variables included in a tool is often considered a proxy for ease of administration. We studied how constraints on the number of variables affect predictive accuracy. The proposed model and FRAT-up both attained the same discriminative ability; the area under the Receiver Operating Characteristic (ROC) curve (AUC) for multiple falls was 0.71. They outperformed the other risk scores, which reported AUCs for multiple falls between 0.64 and 0.65. Thus, it appears that both data-driven and literature-based approaches are better at estimating fall risk than commonly used fall risk indicators. The accuracy-parsimony analysis revealed that tools with a small number of predictors (~1-5) were suboptimal. Increasing the number of variables improved the predictive accuracy, reaching a plateau at ~20-30, which we can consider as the best trade-off between accuracy and parsimony. Obtaining the values of these ~20-30 variables does not compromise usability, since they are usually available in comprehensive geriatric assessments.

  17. Intraoperative Adductor Canal Block for Augmentation of Periarticular Injection in Total Knee Arthroplasty: A Cadaveric Study.

    PubMed

    Pepper, Andrew M; North, Trevor W; Sunderland, Adam M; Davis, Jason J

    2016-09-01

    Function is often sacrificed for pain control after total knee arthroplasty. Motor-sparing blocks, including adductor canal block (ACB) and periarticular injection (PAI), have gained interest to address this compromise. Our study evaluates the anatomic feasibility, accuracy, and safety of intraoperative ACB as an adjunct to PAI by analyzing 3 different injection orientations and needle configurations. Eleven cadaveric knees underwent a standard medial parapatellar arthrotomy. Blunt dissection through the suprapatellar recess was performed. Using a 10-mL syringe, various colors of dyed liquid gelatin were injected toward the proximal and distal adductor canal (AC) using 3 needle configurations. Medial dissection of the knee for each specimen was performed. The position of each needle and location of injected dye was identified and described relative to the AC. Accuracy of each injection orientation and/or needle configuration was different: 86% for a blunt needle in the distal AC, 57% for blunt needle in the proximal AC, and 14% for a spinal needle in the proximal AC. Puncture of the femoral artery was observed with the spinal needle 43% of the time and had the closest average proximity to the femoral artery with a distance of 5.9 mm. There were no vascular punctures using blunt needles, and the average distance from the femoral artery with proximal and distal orientation was 10.2 mm and 15.4 mm, respectively. Intraoperative ACB augmentation of PAI appears to be anatomically feasible and safe. There was decreased accuracy and increased risk of vascular puncture using a 3.5-inch spinal needle. A blunt 1.5-inch needle directed toward the distal AC had the highest accuracy while minimizing vascular injury. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Accuracy of a three-dimensional dentition model digitized from an interocclusal record using a non-contact surface scanner.

    PubMed

    Kihara, Takuya; Yoshimi, Yuki; Taji, Tsuyoshi; Murayama, Takeshi; Tanimoto, Kotaro; Nikawa, Hiroki

    2016-08-01

    For orthodontic treatment, it is important to assess the dental morphology, as well as the position and inclination of teeth. The aim of this article was to develop an efficient and accurate method for the three-dimensional (3D) imaging of the maxillary and mandibular dental morphology by measuring interocclusal records using an optical scanner. The occlusal and incisal morphology of participants was registered in the intercuspal position using a hydrophilic vinyl polysiloxane and digitized into 3D models using an optical scanner. Impressions of the maxilla and mandible were made in alginate materials in order to fabricate plaster models, which were converted into 3D models using the optical scanner, based on the principle of triangulation. The occlusal and incisal areas of the interocclusal records were retained; the buccal and lingual areas were supplied entirely by the 3D model of the plaster model. The accuracy of this method was evaluated for each tooth, with the dental cast 3D models used as controls. The 3D model created from the interocclusal record and the plaster model of the dental morphology was analysed in 3D software. The difference between the controls and the 3D models digitized from the interocclusal records was 0.068 ± 0.048 mm, demonstrating the accuracy of this method. The presence of severe crowding may compromise the ability to separate each tooth and digitize the dental morphology. The digitization method in this study provides sufficient accuracy to visualize the dental morphology, as well as the position and inclination of the teeth. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Evaluation of accuracy of IHI Trigger Tool in identifying adverse drug events: a prospective observational study.

    PubMed

    das Dores Graciano Silva, Maria; Martins, Maria Auxiliadora Parreiras; de Gouvêa Viana, Luciana; Passaglia, Luiz Guilherme; de Menezes, Renata Rezende; de Queiroz Oliveira, João Antonio; da Silva, Jose Luiz Padilha; Ribeiro, Antonio Luiz Pinho

    2018-06-06

    Adverse drug events (ADEs) can seriously compromise the safety and quality of care provided to hospitalized patients, requiring the adoption of accurate methods to monitor them. We sought to prospectively evaluate the accuracy of the triggers proposed by the Institute for Healthcare Improvement (IHI) for identifying ADEs. A prospective study was conducted in a public university hospital, in 2015, with patients ≥18 years. Triggers proposed by IHI and clinical alterations suspected to be ADEs were searched daily. The number of days in which the patient was hospitalized was considered as the unit of measure to evaluate the accuracy of each trigger. Three hundred patients were included in this study. Mean age was 56.3 years (standard deviation (SD) 16.0), and 154 (51.3%) were female. The frequency of patients with ADEs was 24.7% and with at least one trigger was 53.3%. Among patients who had at least one trigger, the most frequent triggers were antiemetics (57.5%) and "abrupt medication stop" (31.8%). Triggers' sensitivity ranged from 0.3 to 11.8% and the positive predictive value ranged from 1.2 to 27.3%. Specificity and negative predictive value were greater than 86%. Most patients identified by the presence of triggers did not have ADEs (64.4%). No triggers were identified in 40 (38.5%) ADEs. The IHI Trigger Tool did not show good accuracy in detecting ADEs in this prospective study. The adoption of combined strategies could enhance effectiveness in identifying patient safety flaws. Further discussion might contribute to improve trigger usefulness in clinical practice. This article is protected by copyright. All rights reserved.

  20. A deformable head and neck phantom with in-vivo dosimetry for adaptive radiotherapy quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graves, Yan Jiang; Smith, Arthur-Allen; Mcilvena, David

    Purpose: Patients’ interfractional anatomic changes can compromise the initial treatment plan quality. To overcome this issue, adaptive radiotherapy (ART) has been introduced. Deformable image registration (DIR) is an important tool for ART, and several deformable phantoms have been built to evaluate the algorithms’ accuracy. However, there is a lack of deformable phantoms that can also provide dosimetric information to verify the accuracy of the whole ART process. The goal of this work is to design and construct a deformable head and neck (HN) ART quality assurance (QA) phantom with in vivo dosimetry. Methods: An axial slice of an HN patient is taken as a model for the phantom construction. Six anatomic materials are considered, with HU numbers similar to a real patient. A filled balloon inside the phantom tissue is inserted to simulate tumor. Deflation of the balloon simulates tumor shrinkage. Nonradiopaque surface markers, which do not influence DIR algorithms, provide the deformation ground truth. Fixed and movable holders are built in the phantom to hold a diode for dosimetric measurements. Results: The measured deformations at the surface marker positions can be compared with deformations calculated by a DIR algorithm to evaluate its accuracy. In this study, the authors selected a Demons algorithm as a DIR algorithm example for demonstration purposes. The average error magnitude is 2.1 mm. The point dose measurements from the in vivo diode dosimeters show a good agreement with the calculated doses from the treatment planning system with a maximum difference of 3.1% of prescription dose, when the treatment plans are delivered to the phantom with original or deformed geometry. Conclusions: In this study, the authors have presented the functionality of this deformable HN phantom for testing the accuracy of DIR algorithms and verifying the ART dosimetric accuracy. The authors’ experiments demonstrate the feasibility of this phantom serving as an end-to-end ART QA phantom.

  1. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
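
    For reference, the ADI scheme singled out above alternates implicit solves along each coordinate direction, so each half-step reduces to a set of tridiagonal solves. A minimal sketch for the 2-D heat equation (illustrative, not Kiva's code):

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def adi_step(u, r):
        """One ADI step for u_t = a*(u_xx + u_yy) on a square grid with zero
        Dirichlet boundaries; r = a*dt/(2*h**2)."""
        n = u.shape[0]
        ab = np.zeros((3, n - 2))   # tridiagonal (I - r*delta) in banded form
        ab[0, 1:] = -r              # superdiagonal
        ab[1, :] = 1.0 + 2.0 * r    # main diagonal
        ab[2, :-1] = -r             # subdiagonal
        half = u.copy()
        for j in range(1, n - 1):   # sweep 1: implicit in x, explicit in y
            rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
            half[1:-1, j] = solve_banded((1, 1), ab, rhs)
        out = half.copy()
        for i in range(1, n - 1):   # sweep 2: implicit in y, explicit in x
            rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
            out[i, 1:-1] = solve_banded((1, 1), ab, rhs)
        return out
    ```

    Because each half-step is only tridiagonal, the scheme keeps implicit stability at near-explicit cost, which is consistent with the balance of accuracy, performance, and stability reported above.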

  2. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases.

    PubMed

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-08-01

    Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Hospital discharges from 11 Mayo Clinic hospitals during January 2008-September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. © Health Research and Educational Trust.
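
    The linkage itself is exact matching on tuples of quasi-identifiers; a sketch of the strongest reported combination (gender, birthdate, five-digit zip, SSNL4), with field names that are assumptions for illustration:

    ```python
    def link_records(claims_a, claims_b,
                     keys=("gender", "birthdate", "zip5", "ssnl4")):
        """Exact-match linkage of two de-identified claim lists on a key tuple.
        Records are dicts; the field names here are illustrative assumptions."""
        index = {}
        for rec in claims_b:
            index.setdefault(tuple(rec[k] for k in keys), []).append(rec)
        return [(a, b) for a in claims_a
                for b in index.get(tuple(a[k] for k in keys), [])]
    ```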

  3. Afferent and Efferent Aspects of Mandibular Sensorimotor Control in Adults who Stutter

    PubMed Central

    Daliri, Ayoub; Prokopenko, Roman A.; Max, Ludo

    2013-01-01

    Purpose Individuals who stutter show sensorimotor deficiencies in speech and nonspeech movements. For the mandibular system, we dissociated the sense of kinesthesia from the efferent control component to examine whether kinesthetic integrity itself is compromised in stuttering or whether deficiencies occur only when generating motor commands. Method We investigated 11 stuttering and 11 nonstuttering adults’ kinesthetic sensitivity threshold and kinesthetic accuracy for passive jaw movements as well as their minimal displacement threshold and positioning accuracy for active jaw movements. We also investigated the correlation with an anatomical index of jaw size. Results The groups showed no statistically significant differences on sensory measures for passive jaw movements. Although some stuttering individuals performed more poorly than any nonstuttering participants on the active movement tasks, between-group differences for active movements were also not statistically significant. Unlike fluent speakers, however, the stuttering group showed a statistically significant correlation between mandibular size and performance in the active and passive near-threshold tasks. Conclusions Previously reported minimal movement differences were not replicated. Instead, stuttering individuals’ performance varied with anatomical properties. These correlational results are consistent with the hypothesis that stuttering participants generate and perceive movements based on less accurate internal models of the involved neuromechanical systems. PMID:23816664

  4. Solvatochromic shifts from coupled-cluster theory embedded in density functional theory

    NASA Astrophysics Data System (ADS)

    Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas

    2013-09-01

    Building on the framework recently reported for determining general response properties for frozen-density embedding [S. Höfener, A. S. P. Gomes, and L. Visscher, J. Chem. Phys. 136, 044104 (2012); doi:10.1063/1.3675845], in this work we report a first implementation of an embedded coupled-cluster in density-functional theory (CC-in-DFT) scheme for electronic excitations, where only the response of the active subsystem is taken into account. The formalism is applied to the calculation of coupled-cluster excitation energies of water and uracil in aqueous solution. We find that the CC-in-DFT results are in good agreement with reference calculations and experimental results. The accuracy of calculations is mainly sensitive to factors influencing the correlation treatment (basis set quality, truncation of the cluster operator) and to the embedding treatment of the ground state (choice of density functionals). This allows for efficient approximations at the excited state calculation step without compromising the accuracy. This approximate scheme makes it possible to use a first principles approach to investigate environment effects with specific interactions at the coupled-cluster level of theory at a cost comparable to that of calculations of the individual subsystems in vacuum.

  5. Intraoperative brain tumor resection cavity characterization with conoscopic holography

    NASA Astrophysics Data System (ADS)

    Simpson, Amber L.; Burgner, Jessica; Chen, Ishita; Pheiffer, Thomas S.; Sun, Kay; Thompson, Reid C.; Webster, Robert J., III; Miga, Michael I.

    2012-02-01

    Brain shift compromises the accuracy of neurosurgical image-guided interventions if not corrected by either intraoperative imaging or computational modeling. The latter requires intraoperative sparse measurements for constraining and driving model-based compensation strategies. Conoscopic holography, an interferometric technique that measures the distance of a laser light illuminated surface point from a fixed laser source, was recently proposed for non-contact surface data acquisition in image-guided surgery and is used here for validation of our modeling strategies. In this contribution, we use this inexpensive, hand-held conoscopic holography device for intraoperative validation of our computational modeling approach to correcting for brain shift. Laser range scan, instrument swabbing, and conoscopic holography data sets were collected from two patients undergoing brain tumor resection therapy at Vanderbilt University Medical Center. The results of our study indicate that conoscopic holography is a promising method for surface acquisition since it requires no contact with delicate tissues and can characterize the extents of structures within confined spaces. We demonstrate that for two clinical cases, the acquired conoprobe points align with our model-updated images better than the uncorrected images lending further evidence that computational modeling approaches improve the accuracy of image-guided surgical interventions in the presence of soft tissue deformations.

  6. Assessment of female breast dose for thoracic cone-beam CT using MOSFET dosimeters

    PubMed Central

    Qiu, Bo; Liang, Jian; Xie, Weihao; Deng, Xiaowu; Qi, Zhenyu

    2017-01-01

    Objective: To assess the breast dose during a routine thoracic cone-beam CT (CBCT) check with the efforts to explore the possible dose reduction strategy. Materials and Methods: Metal oxide semiconductor field-effect transistor (MOSFET) dosimeters were used to measure breast surface doses during a thorax kV CBCT scan in an anthropomorphic phantom. Breast doses for different scanning protocols and breast sizes were compared. Dose reduction was attempted by using partial arc CBCT scan with bowtie filter. The impact of this dose reduction strategy on image registration accuracy was investigated. Results: The average breast surface doses were 20.02 mGy and 11.65 mGy for thoracic CBCT without filtration and with filtration, respectively. This indicates a dose reduction of 41.8% by use of bowtie filter. It was found 220° partial arc scanning significantly reduced the dose to contralateral breast (44.4% lower than ipsilateral breast), while the image registration accuracy was not compromised. Conclusions: Breast dose reduction can be achieved by using ipsilateral 220° partial arc scan with bowtie filter. This strategy also provides sufficient image quality for thorax image registration in daily patient positioning verification. PMID:28423624

  7. Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Xiaoyuan; Du, Liang; Duan, Dongliang

    GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, 118-bus systems are simulated to show the proposed algorithms’ performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.
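
    The weighted least squares baseline the authors compare against solves the linearized measurement model z = Hx + e; a minimal sketch:

    ```python
    import numpy as np

    def wls_state_estimate(H, z, sigmas):
        """Weighted least-squares solution of the linearized model z = H x + e,
        weighting each measurement by 1/sigma^2."""
        W = np.diag(1.0 / np.asarray(sigmas) ** 2)
        G = H.T @ W @ H                       # gain matrix
        return np.linalg.solve(G, H.T @ W @ z)
    ```

    Spoofed phasors appear as structured residuals z - Hx, which is the kind of signature a spoofing-aware correction can exploit.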

  8. Automated Detection of Electroencephalography Artifacts in Human, Rodent and Canine Subjects using Machine Learning.

    PubMed

    Levitt, Joshua; Nitenson, Adam; Koyama, Suguru; Heijmans, Lonne; Curry, James; Ross, Jason T; Kamerling, Steven; Saab, Carl Y

    2018-06-23

    Electroencephalography (EEG) invariably contains extra-cranial artifacts that are commonly dealt with based on qualitative and subjective criteria. Failure to account for EEG artifacts compromises data interpretation. We have developed a quantitative and automated support vector machine (SVM)-based algorithm to accurately classify artifactual EEG epochs in awake rodent, canine and human subjects. An embodiment of this method also enables the determination of 'eyes open/closed' states in human subjects. The levels of SVM accuracy for artifact classification in humans, Sprague Dawley rats and beagle dogs were 94.17%, 83.68%, and 85.37%, respectively, whereas 'eyes open/closed' states in humans were labeled with 88.60% accuracy. Each of these results was significantly higher than chance. Comparison with Existing Methods: Other existing methods, such as those dependent on Independent Component Analysis, have not been tested in non-human subjects and require full EEG montages rather than the single channels this method uses. We conclude that our EEG artifact detection algorithm provides a valid and practical solution to a common problem in the quantitative analysis and assessment of EEG in pre-clinical research settings across evolutionary spectra. Copyright © 2018. Published by Elsevier B.V.
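
    A sketch of the general recipe, per-epoch features fed to an SVM, using scikit-learn; the feature choices below are illustrative assumptions, not the authors' exact set:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def epoch_features(epoch):
        """epoch: 1-D array of samples; simple amplitude/complexity features."""
        return [epoch.var(), np.abs(np.diff(epoch)).sum(), np.ptp(epoch)]

    def train_artifact_classifier(epochs, labels):
        """labels: 1 = artifact epoch, 0 = clean epoch."""
        X = np.array([epoch_features(e) for e in epochs])
        return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
    ```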

  9. Moderate sleep deprivation produces impairments in cognitive and motor performance equivalent to legally prescribed levels of alcohol intoxication

    PubMed Central

    Williamson, A; Feyer, A.

    2000-01-01

    OBJECTIVES—To compare the relative effects on performance of sleep deprivation and alcohol.
METHODS—Performance effects were studied in the same subjects over a period of 28 hours of sleep deprivation and after measured doses of alcohol up to about 0.1% blood alcohol concentration (BAC). There were 39 subjects, 30 employees from the transport industry and nine from the army.
RESULTS—After 17-19 hours without sleep, corresponding to 22:30 and 01:00, performance on some tests was equivalent to, or worse than, that at a BAC of 0.05%. Response speeds were up to 50% slower on some tests, and accuracy measures were significantly poorer than at this level of alcohol. After longer periods without sleep, performance reached levels equivalent to the maximum alcohol dose given to subjects (BAC of 0.1%).
CONCLUSIONS—These findings reinforce the evidence that the fatigue of sleep deprivation is an important factor likely to compromise the speed and accuracy of performance needed for safety on the road and in other industrial settings.


Keywords: sleep deprivation; performance; alcohol PMID:10984335

  10. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size (a combinatorial explosion in experimentation and model building as the number of variables grows), and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
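    The combinatorial explosion referred to is easy to quantify with standard design-of-experiments run counts (textbook formulas, not figures from the paper): a three-level full factorial design needs 3^k runs, while a face-centered central composite design (CCD) needs 2^k + 2k + 1.

```python
# Growth of experiment size with the number of design variables k:
# three-level full factorial = 3**k runs; face-centered central
# composite design (CCD) = 2**k + 2*k + 1 runs.
for k in (5, 10, 15):
    print(f"k={k:2d}: full factorial {3**k:>10,} runs, "
          f"CCD {2**k + 2*k + 1:>6,} runs")
# k= 5: full factorial        243 runs, CCD     43 runs
# k=10: full factorial     59,049 runs, CCD  1,045 runs
# k=15: full factorial 14,348,907 runs, CCD 32,799 runs
```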

  11. Surmounting retraining limits in musicians' dystonia by transcranial stimulation.

    PubMed

    Furuya, Shinichi; Nitsche, Michael A; Paulus, Walter; Altenmüller, Eckart

    2014-05-01

    Abnormal cortical excitability is evident in various movement disorders that compromise fine motor control. Here we tested whether skilled finger movements can be restored in musicians with focal hand dystonia through behavioral training assisted by transcranial direct current stimulation of the motor cortex of both hemispheres. The bilateral motor cortices of 20 pianists (10 with focal dystonia, 10 healthy controls) were stimulated noninvasively during bimanual mirrored finger movements. We found improvement in the rhythmic accuracy of sequential finger movements of the affected hand during and after cathodal stimulation over the affected cortex with simultaneous anodal stimulation over the unaffected cortex. The improvement was retained 4 days after the intervention. Neither stimulation with the reversed electrode montage nor sham stimulation yielded any improvement. Furthermore, the amount of improvement was positively correlated with the severity of the symptoms. Bihemispheric stimulation without concurrent motor training failed to improve fine motor control, underlining the importance of combining retraining and stimulation for alleviating the dystonic symptoms. For the healthy pianists, none of the stimulation protocols enhanced movement accuracy. These results suggest a therapeutic potential of behavioral training assisted by bihemispheric, noninvasive brain stimulation for restoring fine motor control in focal dystonia. © 2014 American Neurological Association.

  12. Radiation Parameters of High Dose Rate Iridium-192 Sources

    NASA Astrophysics Data System (ADS)

    Podgorsak, Matthew B.

    A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.
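    How an error in an assumed radiation parameter propagates into dose delivery can be sketched directly from the decay law A(t) = A0 * 2^(-t / T_half). The code below uses the accepted Ir-192 half-life of about 73.8 days and an illustrative half-life error; the specific numbers are not from the thesis:

```python
# Relative activity (and hence dose rate) error caused by an error
# in the assumed half-life, using A(t) = A0 * 2**(-t / T_half).
T_true, T_assumed = 73.8, 74.5   # days; Ir-192, illustrative error
for t in (30, 60, 90):           # days since source calibration
    A_true = 2 ** (-t / T_true)
    A_assumed = 2 ** (-t / T_assumed)
    print(f"t={t:2d} d: relative activity error "
          f"{(A_assumed - A_true) / A_true:+.2%}")
```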

  13. Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach

    DOE PAGES

    Fan, Xiaoyuan; Du, Liang; Duan, Dongliang

    2017-02-01

    GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems that incorporate the civilian GPS signal. In particular, for the current agenda of power grid modernization, GSA may greatly jeopardize the benefits provided by pervasively installed phasor measurement units (PMUs). In this study, we consider the case where synchrophasor data from PMUs are compromised by a single GSA, and show that the data can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, and 118-bus systems are simulated to show the proposed algorithms' performance on GSA detection and state estimation. Numerical results demonstrate that the proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least squares method and approaches the performance of the Genie-aided method.

  14. Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.

    PubMed

    Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth

    2018-01-01

    Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.

  15. Bryan's effect and anisotropic nonlinear damping

    NASA Astrophysics Data System (ADS)

    Joubert, Stephan V.; Shatalov, Michael Y.; Fay, Temple H.; Manzhirov, Alexander V.

    2018-03-01

    In 1890, G. H. Bryan discovered the following: "The vibration pattern of a revolving cylinder or bell revolves at a rate proportional to the inertial rotation rate of the cylinder or bell." We call this phenomenon Bryan's law or Bryan's effect. It is well known that any imperfections in a vibratory gyroscope (VG) affect Bryan's law, and this in turn affects the accuracy of the VG. Consequently, in this paper, we assume that all such imperfections are either minimised or eliminated by some known control method and that only damping is present within the VG. If the damping is isotropic (linear or nonlinear), then it has been recently demonstrated in this journal, using symbolic analysis, that Bryan's law remains invariant. However, it is known that linear anisotropic damping does affect Bryan's law. In this paper, we generalise Rayleigh's dissipation function so that anisotropic nonlinear damping may be introduced into the equations of motion. Using a mixture of numeric and symbolic analysis on the ODEs of motion of the VG, for anisotropic light nonlinear damping, we demonstrate (up to an approximate average) that Bryan's law is affected by any form of such damping, causing pattern drift and compromising the accuracy of the VG.
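    For orientation, Bryan's law and the kind of anisotropic nonlinear dissipation function at issue can be written compactly. The following is a minimal two-mode sketch in our own notation, under stated assumptions; it is not the paper's exact formulation:

```latex
% Bryan's law: the vibration pattern precesses at a fixed fraction
% (the Bryan factor B) of the inertial rotation rate \Omega(t):
\dot{\theta}_{\mathrm{pattern}} = B\,\Omega(t).

% A generalized Rayleigh dissipation function for a reduced two-mode
% (x, y) model with anisotropic nonlinear damping: the damping force
% on each mode is \partial F / \partial \dot{x}, etc., and anisotropy
% means the coefficients differ between the modes.
F = \tfrac{1}{2} c_x \dot{x}^2 + \tfrac{1}{4}\,\beta_x \dot{x}^4
  + \tfrac{1}{2} c_y \dot{y}^2 + \tfrac{1}{4}\,\beta_y \dot{y}^4,
\qquad (c_x,\beta_x) \neq (c_y,\beta_y).
```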

  16. Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.

    PubMed

    Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin

    2016-10-01

    Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. To that end, several demand forecasting methods were compared in terms of forecast accuracy. The results confirm that applying Croston's method combined with single exponential smoothing yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns was combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
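    Croston's method, the best performer here for lumpy, erratic, and slow-moving demand, smooths the nonzero demand sizes and the inter-demand intervals separately and forecasts their ratio. A minimal sketch (the smoothing constant and demand series are illustrative):

```python
def croston(demand, alpha=0.1):
    """Croston's method: exponentially smooth nonzero demand sizes (z)
    and inter-demand intervals (p) separately; forecast = z / p."""
    z = p = None
    q = 1                        # periods since the last nonzero demand
    for d in demand:
        if d > 0:
            if z is None:        # initialize on the first demand event
                z, p = float(d), float(q)
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0

# Lumpy series: mostly zeros with occasional demand spikes.
print(croston([0, 3, 0, 0, 5, 0, 4, 0, 0, 0, 6]))  # per-period rate
```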

  17. A Pairwise Naïve Bayes Approach to Bayesian Classification.

    PubMed

    Asafu-Adjei, Josephine K; Betensky, Rebecca A

    2015-10-01

    Despite the relatively high accuracy of the naïve Bayes (NB) classifier, there may be several instances where it is not optimal, i.e., where it does not match the classification performance of the Bayes classifier utilizing the joint distribution of the examined attributes. However, the Bayes classifier can be computationally intractable due to its required knowledge of the joint distribution. Therefore, we introduce a "pairwise naïve" Bayes (PNB) classifier that incorporates all pairwise relationships among the examined attributes, but does not require specification of the joint distribution. In this paper, we first describe the necessary and sufficient conditions under which the PNB classifier is optimal. We then discuss sufficient conditions under which the PNB classifier, and not NB, is optimal for normal attributes. Through simulation and real-data studies, we evaluate the performance of our proposed classifier relative to the Bayes and NB classifiers, along with the HNB, AODE, LBR, and TAN classifiers, using normal density and empirical estimation methods. Our applications show that the PNB classifier using normal density estimation yields the highest accuracy for data sets containing continuous attributes. We conclude that it offers a useful compromise between the Bayes and NB classifiers.
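    One natural reading of the pairwise construction is a composite likelihood over attribute pairs: for each class, fit a bivariate normal to every pair of attributes and score a new point by the log prior plus the sum of pairwise log-densities. The sketch below follows that reading; it is an interpretation for illustration, not necessarily the authors' exact estimator:

```python
from itertools import combinations
import numpy as np
from scipy.stats import multivariate_normal

def fit_pnb(X, y):
    """Per class: prior plus a bivariate normal for each attribute pair."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        pairs = {(i, j): (Xc[:, [i, j]].mean(axis=0),
                          np.cov(Xc[:, [i, j]], rowvar=False))
                 for i, j in combinations(range(X.shape[1]), 2)}
        model[c] = (np.mean(y == c), pairs)
    return model

def predict_pnb(model, x):
    """Score = log prior + sum of pairwise bivariate log-densities."""
    score = lambda prior, pairs: np.log(prior) + sum(
        multivariate_normal.logpdf(x[[i, j]], m, S)
        for (i, j), (m, S) in pairs.items())
    return max(model, key=lambda c: score(*model[c]))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(1, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
print(predict_pnb(fit_pnb(X, y), np.array([0.9, 1.1, 1.0])))  # likely 1
```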

  18. Deficits in oculomotor performance in pediatric epilepsy

    PubMed Central

    Asato, Miya R.; Nawarawong, Natalie; Hermann, Bruce; Crumrine, Patricia; Luna, Beatriz

    2010-01-01

    Purpose: Given evidence of limitations in neuropsychological performance in epilepsy, we probed the integrity of components of cognition, including speed of processing, response inhibition, and spatial working memory supporting executive function, in pediatric epilepsy patients and matched controls. Methods: A total of 44 pairs of controls and medically treated pediatric epilepsy patients with no known brain pathology completed cognitive oculomotor tasks, computerized neuropsychological testing, and psychiatric assessment. Results: Patients showed slower reaction times to initiate a saccadic response compared to controls but had intact saccade accuracy. Cognitively driven responses, including response inhibition, were impaired in the patient group. Patients had an increased incidence of comorbid psychopathology, but comorbid ADHD did not predict worse functioning. Epilepsy type and medication status were not predictive of outcome. More complex neuropsychological performance was impaired in tasks requiring visual memory and sequential processing, which were correlated with inhibitory control and antisaccade accuracy. Discussion: Pediatric epilepsy may be associated with vulnerabilities that specifically undermine speed of processing and response inhibition, but not working memory, and that may underlie known neuropsychological performance limitations. This particular profile of abnormalities may be associated with seizure-mediated compromises in brain maturation early in development. PMID:21087246

  19. MRI and PET Compatible Bed for Direct Co-Registration in Small Animals

    NASA Astrophysics Data System (ADS)

    Bartoli, Antonietta; Esposito, Giovanna; D'Angeli, Luca; Chaabane, Linda; Terreno, Enzo

    2013-06-01

    To obtain accurate co-registration with stand-alone PET and MRI scanners, we developed a compatible bed system for mice and rats that enables both images to be acquired without repositioning the animals. MRI acquisitions were performed on a preclinical 7T scanner (Pharmascan, Bruker), whereas PET scans were acquired on a YAP-(S)PET (ISE s.r.l.). The bed performance was tested both on a phantom (NEMA Image Quality phantom) and in vivo (brains of healthy rats and mice). Fiducial markers filled with a drop of 18F were visible in both modalities. Co-registration was performed using a point-based registration technique. The reproducibility and accuracy of the co-registration were assessed using the phantom. The reproducibility of the translation distances was 0.2 mm along the z axis. The accuracy depended on the physical size of the phantom structures under investigation but the error was always lower than 4%. Regions of interest (ROIs) drawn on the fused images were used for quantification. PET and MRI intensity profiles on small structures of the phantom showed that the underestimation in activity concentration reached 90% in regions smaller than the PET spatial resolution, while MRI allowed good visualization of the 1 mm diameter rod. PET/MRI images of healthy mice and rats highlighted the expected superior capability of MRI to define brain structures. The simplicity of our MRI/PET compatible bed and the quality of the fused images offer a promising opportunity for preclinical translation, particularly for neuroimaging studies.
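    The point-based registration mentioned has a standard closed-form solution: align the corresponding fiducial positions with a least-squares rigid transform via the Kabsch/SVD construction. A minimal sketch with toy marker coordinates (not the study's data):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    (Kabsch/SVD solution on centered coordinates)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # proper rotation (no reflection)
    return R, cQ - R @ cP       # translation

# Toy fiducials seen in MRI (P) and PET (Q) coordinates, in mm.
P = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
a = np.deg2rad(5.0)             # small rotation between the scanners
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(P, Q)
print(np.allclose(P @ R.T + t, Q))  # True: markers align
```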

  20. A modified elastic foundation contact model for application in 3D models of the prosthetic knee.

    PubMed

    Pérez-González, Antonio; Fenollosa-Esteve, Carlos; Sancho-Bru, Joaquín L; Sánchez-Marín, Francisco T; Vergara, Margarita; Rodríguez-Cervantes, Pablo J

    2008-04-01

    Different models have been used in the literature for the simulation of surface contact in biomechanical knee models. However, there is a lack of systematic comparisons of these models applied to the simulation of a common case, which would provide relevant information about their accuracy and suitability for models of the implanted knee. In this work a comparison of the Hertz model (HM), the elastic foundation model (EFM) and the finite element model (FEM) for the simulation of elastic contact in a 3D model of the prosthetic knee is presented. From the results of this comparison it is found that although the nature of the EFM offers advantages over the HM for application to realistic prosthetic surfaces, and over the FEM in CPU time, its predictions can differ from those of the FEM in some circumstances. These differences are considerable when the comparison is performed for prescribed displacements, although they are less important for prescribed loads. To address these problems a new modified elastic foundation model (mEFM) is proposed that essentially maintains the simplicity of the original model while producing much more accurate results. In this paper it is shown that the new mEFM calculates pressure distribution and contact area accurately and with short computation times for toroidal contacting surfaces. Although further work is needed to confirm its validity for more complex geometries, the mEFM is envisaged as a good option for use in 3D knee models to predict prosthetic knee performance.
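    The appeal of the elastic foundation family is that pressure follows pointwise from penetration: the bearing layer is treated as a bed of independent springs, p = K * delta, with K the constrained modulus of an elastic layer of thickness h. A minimal sketch with illustrative polyethylene-like values (the mEFM's specific modification is not reproduced here):

```python
import numpy as np

# Elastic foundation model: each surface point is an independent
# spring, p = K * delta, with the standard constrained-modulus
# stiffness for an elastic layer of thickness h.
E, nu, h = 800.0, 0.46, 8.0      # MPa, Poisson ratio, mm (illustrative)
K = (1 - nu) * E / ((1 + nu) * (1 - 2 * nu) * h)   # MPa per mm

x = np.linspace(-3.0, 3.0, 7)                 # mm across the contact
delta = np.maximum(0.0, 0.05 - 0.004 * x**2)  # penetration depth, mm
pressure = K * delta                          # contact pressure, MPa
print(np.round(pressure, 2))
```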
