Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node of the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, with the switching dictated by a non-stationary Markov chain on the network graph.
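The gossip mechanism can be sketched compactly. The following is a minimal illustration, not the authors' implementation: each node runs an ordinary local Kalman filter on its own measurements, and each available network link swaps the two endpoints' entire filter states (estimate and error covariance). The `gikf_round` helper and all model matrices are hypothetical names for a generic linear model x_{k+1} = A x_k + w_k, y_i = H_i x_k + v_i.

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_step(xhat, P, y, A, H, Q, R):
    # Standard local Kalman time + measurement update.
    xpred = A @ xhat
    Ppred = A @ P @ A.T + Q
    K = Ppred @ H.T @ np.linalg.inv(H @ Ppred @ H.T + R)
    return xpred + K @ (y - H @ xpred), (np.eye(len(xhat)) - K @ H) @ Ppred

def gikf_round(states, y, A, H, Q, R, edges, p_link=0.3):
    # One GIKF round: filter locally, then gossip. Each available link
    # swaps the endpoints' whole filter states (estimate and covariance).
    states = [kalman_step(x, P, y[i], A, H[i], Q, R)
              for i, (x, P) in enumerate(states)]
    for i, j in edges:
        if rng.random() < p_link:  # link i-j happens to be available
            states[i], states[j] = states[j], states[i]
    return states
```

Note that gossip here exchanges rather than averages states, which is what keeps the per-round communication cost to a single neighbor interaction.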
Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang
2017-11-01
Modern civil aircraft use air-ventilated pressurized cabins subject to limited space. In order to monitor multiple contaminants and overcome the hypersensitivity of a single sensor, the paper constructs an output-correction integrated sensor configuration using sensors with different measurement principles, after comparison with two other configurations. This proposed configuration works as a node in the contaminant distributed wireless sensor monitoring network. The corresponding measurement error models of the integrated sensors are also proposed, using the Kalman consensus filter to estimate states and conduct data fusion in order to regulate the single-sensor measurement results. The paper develops a sufficiency proof of Kalman consensus filter stability in the presence of system and observation noise, and compares the mean estimation and mean consensus errors of the Kalman consensus filter and the local Kalman filter. Numerical examples show the effectiveness of the algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
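The Kalman consensus filter used for the fusion is not spelled out in the abstract; a minimal sketch of one common form (an Olfati-Saber-style update, with illustrative names and an assumed consensus gain) is:

```python
import numpy as np

def kcf_step(xhat_i, P_i, y_i, neighbor_xhats, A, H_i, Q, R_i, eps=0.05):
    # Kalman measurement update on the local observation, plus a consensus
    # term pulling the node's estimate toward its neighbors' estimates.
    K = P_i @ H_i.T @ np.linalg.inv(H_i @ P_i @ H_i.T + R_i)
    consensus = eps * sum(xj - xhat_i for xj in neighbor_xhats)
    x_corr = xhat_i + K @ (y_i - H_i @ xhat_i) + consensus
    # Time update (prediction) for the next step.
    return A @ x_corr, A @ (P_i - K @ H_i @ P_i) @ A.T + Q
```

The consensus gain eps trades agreement speed against noise amplification; stability conditions like those proved in the paper constrain its admissible range.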
Event-triggered Kalman-consensus filter for two-target tracking sensor networks.
Su, Housheng; Li, Zhenghao; Ye, Yanyan
2017-11-01
This paper is concerned with the problem of an event-triggered Kalman-consensus filter for two-target tracking sensor networks. According to the event-triggered protocol and a mean-square analysis, a suboptimal Kalman gain matrix is derived and a suboptimal event-triggered distributed filter is obtained. Based on the Kalman-consensus filter protocol, all sensors, which depend only on their neighbors' information, can track their corresponding targets. Furthermore, utilizing the Lyapunov method and matrix theory, some sufficient conditions are presented for ensuring the stability of the system. Finally, a simulation example is presented to verify the effectiveness of the proposed event-triggered protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
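The specific trigger is not given in the abstract; as a hedged illustration, a common send-on-delta rule broadcasts a node's state only when it has drifted sufficiently from the last value its neighbors received, which is what cuts communication relative to time-triggered consensus filtering:

```python
import numpy as np

def should_broadcast(xhat, last_sent, delta=0.1):
    # Send-on-delta event trigger: communicate only when the current local
    # estimate deviates from the last broadcast value by more than delta.
    return np.linalg.norm(xhat - last_sent) > delta
```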
Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles
2015-01-01
Introduction: Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Methods: Consensus on trauma care audit filters was built among twenty panelists using a Delphi technique with four anonymous, iterative surveys designed to elicit: i) trauma care processes to be measured; ii) important features of audit filters for the district-level hospital setting; and iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Results: Panelists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). APMO cut-off rate increased successively: Round 1 - 0.58; Round 2 - 0.66; Round 3 - 0.76; and Round 4 - 0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage - vital signs are recorded within 15 minutes of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation - a large bore IV was placed within 15 minutes of patient arrival; referral - if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. Conclusion: This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step toward improving care for the injured at district-level hospitals in LMICs. PMID:26492882
Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian
2014-07-01
This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized for improving performance in distributed parameter systems. By the Lyapunov stability theorem, each moving actuator/sensor agent velocity is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms which penalize the disagreement of the estimates are considered. Both filters yield well-posed collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles
2016-01-01
Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Consensus on trauma care audit filters was built among twenty panellists using a Delphi technique with four anonymous, iterative surveys designed to elicit: (i) trauma care processes to be measured; (ii) important features of audit filters for the district-level hospital setting; and (iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Panellists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). APMO cut-off rate increased successively: Round 1--0.58; Round 2--0.66; Round 3--0.76; and Round 4--0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage--vital signs are recorded within 15 min of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation--a large bore IV was placed within 15 min of patient arrival; referral--if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs, the collection and reporting of prospective trauma care audit filters may be an important step towards improving care for the injured at district-level hospitals in LMICs. Copyright © 2015 Elsevier Ltd. All rights reserved.
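For readers unfamiliar with the consensus metric, here is a toy calculation. It assumes APMO is the proportion of individual ratings that agree with each filter's majority opinion (treating a rank of 9 or 10 as "useful"), averaged over all candidate filters; the panel's exact operational definition may differ.

```python
import numpy as np

def apmo(ratings, useful_cutoff=9):
    # ratings: 2-D array, rows = candidate filters, cols = panelists (0-10).
    # For each filter the majority opinion is "useful" or "not useful";
    # APMO is the fraction of individual ratings agreeing with the majority.
    votes = np.asarray(ratings) >= useful_cutoff
    agree = 0
    for row in votes:
        majority = row.sum() >= len(row) / 2
        agree += (row == majority).sum()
    return agree / votes.size

panel = np.array([[9, 10, 9, 8], [3, 9, 10, 9]])
print(apmo(panel))  # 0.75 with these toy ratings
```

Under the a priori targets, a filter is retained only when its median rank is ≥9 and the panel-wide APMO reaches ≥0.8.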
Decentralized Observer with a Consensus Filter for Distributed Discrete-Time Linear Systems
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Mandic, Milan
2011-01-01
This paper presents a decentralized observer with a consensus filter for state observation of discrete-time linear distributed systems. In this setup, each agent in the distributed system has an observer with a model of the plant that utilizes the set of locally available measurements, which may not make the full plant state detectable. This lack of detectability is overcome by utilizing a consensus filter that blends the state estimate of each agent with its neighbors' estimates. We assume that both the communication graph and the sensing graph are connected at all times. It is proven that the state estimates of the proposed observer asymptotically converge to the actual plant states under arbitrarily changing, but connected, communication and sensing topologies. As a byproduct of this research, we also obtained a result on the location of the eigenvalues (the spectrum) of the Laplacian for a family of graphs with self-loops.
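A minimal sketch of such an observer (illustrative names; the observer and consensus gains are assumed given rather than designed here): each agent applies output injection on its local measurement and blends in its neighbors' estimates, which supplies the state directions its own sensors cannot reconstruct.

```python
import numpy as np

def observer_step(xhat_i, y_i, neighbor_xhats, A, C_i, L_i, gamma=0.2):
    # Luenberger-style local correction plus a consensus blend; the blend
    # compensates for modes that are undetectable from y_i alone.
    blend = gamma * sum(xj - xhat_i for xj in neighbor_xhats)
    return A @ (xhat_i + blend) + L_i @ (y_i - C_i @ xhat_i)
```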
Folmsbee, Martha; Lentine, Kerry Roche; Wright, Christine; Haake, Gerhard; Mcburnie, Leesa; Ashtekar, Dilip; Beck, Brian; Hutchison, Nick; Okhio-Seaman, Laura; Potts, Barbara; Pawar, Vinayak; Windsor, Helena
2014-01-01
Mycoplasma are bacteria that can penetrate 0.2 and 0.22 μm rated sterilizing-grade filters and even some 0.1 μm rated filters. Primary applications for mycoplasma filtration include large-scale mammalian and bacterial cell culture media and serum filtration. The Parenteral Drug Association recognized the absence of standard industry test parameters for testing and classifying 0.1 μm rated filters for mycoplasma clearance and formed a task force to formulate consensus test parameters. The task force established some test parameters by common agreement, based upon general industry practices, without the need for additional testing. However, the culture medium and incubation conditions for generating test mycoplasma cells varied from filter company to filter company, and this was recognized as a serious gap by the task force. Standardization of the culture medium and incubation conditions required collaborative testing in both commercial filter company laboratories and in an independent laboratory (Table I). The use of consensus test parameters will facilitate the ultimate cross-industry goal of standardization of 0.1 μm filter claims for mycoplasma clearance. However, it is still important to recognize that filter performance will depend on the actual conditions of use. Therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. Mycoplasma are small bacteria that have the ability to penetrate sterilizing-grade filters. Filtration of large-scale mammalian and bacterial cell culture media is an example of an industry process where effective filtration of mycoplasma is required. The Parenteral Drug Association recognized the absence of industry standard test parameters for evaluating mycoplasma clearance filters by filter manufacturers and formed a task force to formulate such a consensus among manufacturers. The use of standardized test parameters by filter manufacturers, including the preparation of the culture broth, will facilitate the end user's evaluation of the mycoplasma clearance claims provided by filter vendors. However, it is still important to recognize that filter performance will depend on the actual conditions of use; therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. © PDA, Inc. 2014.
Updating the OMERACT filter: core areas as a basis for defining core outcome sets.
Kirwan, John R; Boers, Maarten; Hewlett, Sarah; Beaton, Dorcas; Bingham, Clifton O; Choy, Ernest; Conaghan, Philip G; D'Agostino, Maria-Antonietta; Dougados, Maxime; Furst, Daniel E; Guillemin, Francis; Gossec, Laure; van der Heijde, Désirée M; Kloppenburg, Margreet; Kvien, Tore K; Landewé, Robert B M; Mackie, Sarah L; Matteson, Eric L; Mease, Philip J; Merkel, Peter A; Ostergaard, Mikkel; Saketkoo, Lesley Ann; Simon, Lee; Singh, Jasvinder A; Strand, Vibeke; Tugwell, Peter
2014-05-01
The Outcome Measures in Rheumatology (OMERACT) Filter provides guidelines for the development and validation of outcome measures for use in clinical research. The "Truth" section of the OMERACT Filter presupposes an explicit framework for identifying the relevant core outcomes that are universal to all studies of the effects of interventions. There is no published outline for instrument choice or development that is aimed at measuring outcome, is derived from broad consensus over its underlying philosophy, and includes a structured and documented critique. Therefore, a new proposal for defining core areas of measurement ("Filter 2.0 Core Areas of Measurement") was presented at OMERACT 11 to explore areas of consensus and to consider whether already endorsed core outcome sets fit into this newly proposed framework. Discussion groups critically reviewed the extent to which case studies of current OMERACT Working Groups complied with or negated the proposed framework, whether these observations had a more general application, and what issues remained to be resolved. Although there was broad acceptance of the framework in general, several important areas of construction, presentation, and clarity of the framework were questioned. The discussion groups and subsequent feedback highlighted 20 such issues. These issues will require resolution to reach consensus on accepting the proposed Filter 2.0 framework of Core Areas as the basis for the selection of Core Outcome Domains and hence appropriate Core Outcome Sets for clinical trials.
Oddo, Mauro; Poole, Daniele; Helbok, Raimund; Meyfroidt, Geert; Stocchetti, Nino; Bouzat, Pierre; Cecconi, Maurizio; Geeraerts, Thomas; Martin-Loeches, Ignacio; Quintard, Hervé; Taccone, Fabio Silvio; Geocadin, Romergryko G; Hemphill, Claude; Ichai, Carole; Menon, David; Payen, Jean-François; Perner, Anders; Smith, Martin; Suarez, José; Videtta, Walter; Zanier, Elisa R; Citerio, Giuseppe
2018-04-01
To report the ESICM consensus and clinical practice recommendations on fluid therapy in neurointensive care patients. A consensus committee comprising 22 international experts met in October 2016 during ESICM LIVES2016. Teleconferences and electronic-based discussions between the members of the committee subsequently served to discuss and develop the consensus process. Population, intervention, comparison, and outcomes (PICO) questions were reviewed and updated as needed, and evidence profiles generated. The consensus focused on three main topics: (1) general fluid resuscitation and maintenance in neurointensive care patients, (2) hyperosmolar fluids for intracranial pressure control, (3) fluid management in delayed cerebral ischemia after subarachnoid haemorrhage. After an extensive literature search, the principles of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system were applied to assess the quality of evidence (from high to very low), to formulate treatment recommendations as strong or weak, and to issue best practice statements when applicable. A modified Delphi process based on the integration of evidence provided by the literature and expert opinions-using a sequential approach to avoid biases and misinterpretations-was used to generate the final consensus statement. The final consensus comprises a total of 32 statements, including 13 strong recommendations and 17 weak recommendations. No recommendations were provided for two statements. We present a consensus statement and clinical practice recommendations on fluid therapy for neurointensive care patients.
Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering
NASA Astrophysics Data System (ADS)
Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.
2018-04-01
Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
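Schematically, the filtering amounts to a normalized cross-correlation of a kernel with the visibility spectrum (a sketch of the idea, not the VISIBLE API; shapes and the noise-whitening assumption are illustrative):

```python
import numpy as np

def matched_filter_snr(data, kernel):
    # data:   complex visibilities, shape (n_channels, n_baselines).
    # kernel: model visibilities for the expected line over fewer channels,
    #         same baseline axis; both assumed noise-whitened.
    n_k = kernel.shape[0]
    resp = np.array([np.sum(np.conj(kernel) * data[lag:lag + n_k]).real
                     for lag in range(data.shape[0] - n_k + 1)])
    # Normalizing by the response's own RMS puts the output on an S/N scale.
    return resp / np.std(resp)
```

Peaks in the returned spectrum above a few σ indicate channel lags where the data match the expected emission pattern, without ever imaging the cube.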
Agapova, Maria; Bresnahan, Brian B; Higashi, Mitchell; Kessler, Larry; Garrison, Louis P; Devine, Beth
2017-02-01
The American College of Radiology develops evidence-based practice guidelines to aid appropriate utilization of radiological procedures. Panel members use expert opinion to weight trade-offs and consensus methods to rate appropriateness of imaging tests. These ratings include an equivocal range, assigned when there is disagreement about a technology's appropriateness, when the evidence base is weak, or for special circumstances. It is not clear how expert consensus merges with the evidence base to arrive at an equivocal rating. Quantitative benefit-risk assessment (QBRA) methods may assist decision makers in this capacity. However, many methods exist and it is not clear which methods are best suited for this application. We perform a critical appraisal of QBRA methods and propose several steps that may aid in making transparent areas of weak evidence and barriers to consensus in guideline development. We identify QBRA methods with potential to facilitate decision making in guideline development and build a decision aid for selecting among these methods. This study identified 2 families of QBRA methods suited to guideline development when expert opinion is expected to contribute substantially to decision making. Key steps to deciding among QBRA methods involve identifying specific benefit-risk criteria and developing a state-of-evidence matrix. For equivocal ratings assigned for reasons other than disagreement or weak evidence base, QBRA may not be needed. In the presence of disagreement but the absence of a weak evidence base, multicriteria decision analysis approaches are recommended; and in the presence of a weak evidence base and the absence of disagreement, incremental net health benefit alone or combined with multicriteria decision analysis is recommended. Our critical appraisal further extends investigation of the strengths and limitations of select QBRA methods in facilitating diagnostic radiology clinical guideline development. The process of using the decision aid exposes and makes transparent areas of weak evidence and barriers to consensus. © 2016 John Wiley & Sons, Ltd.
Mengel, M; Sis, B; Halloran, P F
2007-10-01
The Banff process defined the diagnostic histologic lesions for renal allograft rejection and created a standardized classification system where none had existed. By correcting this deficit the process had universal impact on clinical practice and clinical and basic research. All trials of new drugs since the early 1990s benefited, because the Banff classification of lesions permitted the end point of biopsy-proven rejection. The Banff process has strengths, weaknesses, opportunities and threats (SWOT). The strength is its self-organizing group structure to create consensus. Consensus does not mean correctness: defining consensus is essential if a widely held view is to be proved wrong. The weaknesses of the Banff process are the absence of an independent external standard to test the classification, and its almost exclusive reliance on histopathology, which has inherent limitations in intra- and interobserver reproducibility, particularly at the interface between borderline and rejection, exactly where clinicians demand precision. The opportunity lies in new technology such as transcriptomics, which can form an external standard and can be incorporated into a new classification combining the elegance of histopathology and the objectivity of transcriptomics. The threat is the degree to which the renal transplant community will participate in and support this process.
Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.
La, Hung Manh; Sheng, Weihua
2013-04-01
In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted averaging. Second, we develop a distributed flocking-control algorithm to drive the mobile sensors to form a network and track a virtual leader moving along the field when only a small subset of the mobile sensors knows the leader's information. Experimental results are provided to demonstrate our proposed algorithms.
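The two-phase fusion can be condensed as follows (a sketch with illustrative names; the flocking controller and the exact weight design are omitted): one consensus loop circulates field estimates, a second circulates confidences, and each node fuses the two over time.

```python
import numpy as np

def consensus_sweep(values, adjacency, step=0.2):
    # One iteration of average consensus on a graph: each node moves
    # toward its neighbors' values.
    values = np.asarray(values, dtype=float)
    out = values.copy()
    for i in range(len(values)):
        for j in np.flatnonzero(adjacency[i]):
            out[i] += step * (values[j] - values[i])
    return out

# Phase 1: run consensus_sweep on per-node field measurements (weighted by
# sensing quality in the paper; uniform here) to agree on a field estimate.
# Phase 2: run consensus_sweep on per-node confidences; each node then
# updates its map cell by a confidence-weighted average as the swarm moves.
```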
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
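The random exchange step that keeps the communication cost low can be sketched in isolation (hypothetical structure; the local particle filter updates and the Rao-Blackwellized parameter step are omitted): after each measurement update, every node swaps its entire particle set with one randomly chosen neighbor.

```python
import numpy as np

rng = np.random.default_rng(1)

def redif_exchange(particle_sets, neighbors):
    # particle_sets: per-node (particles, weights) tuples.
    # neighbors:     per-node lists of neighbor indices.
    for i in rng.permutation(len(particle_sets)):
        if neighbors[i]:
            j = rng.choice(neighbors[i])
            # Random exchange: the two nodes trade whole particle sets,
            # diffusing information without iterative gossip rounds.
            particle_sets[i], particle_sets[j] = particle_sets[j], particle_sets[i]
    return particle_sets
```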
Updating the OMERACT filter: discrimination and feasibility.
Wells, George; Beaton, Dorcas E; Tugwell, Peter; Boers, Maarten; Kirwan, John R; Bingham, Clifton O; Boonen, Annelies; Brooks, Peter; Conaghan, Philip G; D'Agostino, Maria-Antonietta; Dougados, Maxime; Furst, Daniel E; Gossec, Laure; Guillemin, Francis; Helliwell, Philip; Hewlett, Sarah; Kvien, Tore K; Landewé, Robert B; March, Lyn; Mease, Philip J; Ostergaard, Mikkel; Simon, Lee; Singh, Jasvinder A; Strand, Vibeke; van der Heijde, Désirée M
2014-05-01
The "Discrimination" part of the OMERACT Filter asks whether a measure discriminates between situations that are of interest. "Feasibility" in the OMERACT Filter encompasses the practical considerations of using an instrument, including its ease of use, time to complete, monetary costs, and interpretability of the question(s) included in the instrument. Both the Discrimination and Reliability parts of the filter have been helpful but were agreed on primarily by consensus of OMERACT participants rather than through explicit evidence-based guidelines. In Filter 2.0 we wanted to improve this definition and provide specific guidance and advice to participants.
Weakly stationary noise filtering of satellite-acquired imagery
NASA Technical Reports Server (NTRS)
Palgen, J. J. O.; Tamches, I.; Deutsch, E. S.
1971-01-01
A type of weakly stationary noise called herringbone noise was observed in satellite imagery. The characteristics of this noise are described; a model for its simulation was developed. The model is used to degrade pictorial data for comparison with similar noise degraded Nimbus data. Two filtering methods are defined and evaluated. A user's application demonstration is discussed.
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used for building the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The prediction results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
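A compact sketch of the consensus LS-SVR idea (illustrative hyperparameters; the DWT denoising is assumed to have been applied to X already): each member model solves the standard LS-SVR dual system, and the consensus prediction averages members trained on random calibration subsets.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix between row-sample matrices A and B.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    # LS-SVR dual solution: [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y].
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, a = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ a + b

def consensus_predict(X, y, Xq, n_models=10, frac=0.7):
    # Consensus model: average member predictions over random subsets.
    preds = []
    for _ in range(n_models):
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        preds.append(lssvr_fit(X[idx], y[idx])(Xq))
    return np.mean(preds, axis=0)
```

Averaging over members trained on different subsets damps the sensitivity of any single LS-SVR fit to its training split, which is the instability the abstract refers to.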
Dalla Vestra, Michele; Grolla, Elisabetta; Bonanni, Luca; Pesavento, Raffaele
2018-03-01
The use of inferior vena cava (IVC) filters to prevent pulmonary embolism is increasing, mainly for indications that are unclearly codified and inconsistently recommended. The evidence supporting this approach is often heterogeneous and mainly based on observational studies and consensus opinions, while the insertion of an IVC filter exposes patients to the risk of complications and increases health care costs. Thus, several proposed indications for IVC filter placement remain controversial. We review the evidence on the efficacy and safety of IVC filters in several "special" clinical settings, and assess the robustness of the available evidence for any specific indication to place an IVC filter.
Weird Science: Teaching Composition in an Antifoundational World.
ERIC Educational Resources Information Center
Bernard-Donals, Michael
The antifoundational or "hermeneutic" paradigm, particularly as it has been internalized by the field of composition studies, exists in a weak version or a strong version. The weak version stresses interactive consensus-building pedagogical practices where discourse is remade by negotiating it with others. The strong version suggests…
Kim, Susan; Kahn, Philip; Robinson, Angela B; Lang, Bianca; Shulman, Andrew; Oberle, Edward J; Schikler, Kenneth; Curran, Megan Lea; Barillas-Arias, Lilliana; Spencer, Charles H; Rider, Lisa G; Huber, Adam M
2017-01-11
Juvenile dermatomyositis (JDM) is the most common form of the idiopathic inflammatory myopathies in children. A subset of children have the rash of JDM without significant weakness, and the optimal treatments for these children are unknown. The goal of this study was to describe the development of consensus clinical treatment plans (CTPs) for children with JDM who have active skin rashes, without significant muscle involvement, referred to as skin predominant JDM in this manuscript. The Children's Arthritis and Rheumatology Research Alliance (CARRA) is a North American consortium of pediatric rheumatology health care providers. CARRA members collaborated to determine consensus on typical treatments for JDM patients with skin findings without significant weakness, to develop CTPs for this subgroup of patients. We used a combination of Delphi surveys and nominal group consensus meetings to develop these CTPs. Consensus was reached on patient characteristics and outcome assessment, and CTPs were developed and finalized for patients with skin predominant JDM. Treatment option A included hydroxychloroquine alone, Treatment option B included hydroxychloroquine and methotrexate, and Treatment option C included hydroxychloroquine, methotrexate and corticosteroids. Three CTPs were developed for use in children with skin predominant JDM, which reflect typical treatment approaches. These are not considered to be specific recommendations or standard of care. Using the CARRA network and prospective data collection, we will be able to apply statistical methods in the future to allow comparisons of JDM patients following these consensus treatment plans.
Practice Governance 101, v. 2013.
Hayes, David F
2013-03-01
Consensus governance is a principal weakness leading to group malfunction and failure. Inadequate group governance produces inadequate decisions, leading to inconsistent patient care, inadequate responses to marketplace challenges, and disregard for customers and strategic partners. The effectiveness of consensus management is limited by the pervasive incomplete knowledge and personal biases of partners. Additional structural weaknesses of group behavior include information cascade, the wisdom of the crowd, groupthink, pluralistic ignorance, analysis paralysis, peer pressure, and the herding instinct. Usual corporate governance is, by necessity, the governance model of choice. Full accountability of the decider(s) is the defining requirement of all successful governance models. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error, and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness and validity of the algorithm are demonstrated through experimental results.
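The robustness mechanism can be shown in isolation (a sketch; the full IDDF propagates sigma points, which is omitted here): Huber's weights leave small normalized residuals untouched and down-weight large ones, and the measurement update is iterated with these weights.

```python
import numpy as np

def huber_weights(residuals, scale, k=1.345):
    # Huber M-estimation weights: weight 1 (quadratic cost) for small
    # normalized residuals, k/|r| (linear cost) beyond the threshold k.
    r = np.abs(np.asarray(residuals, dtype=float)) / scale
    w = np.ones_like(r)
    w[r > k] = k / r[r > k]
    return w

# Inside the filter, the measurement-update least squares problem is
# re-solved iteratively with these weights on the normalized innovations,
# so a single outlier range measurement cannot dominate the correction.
```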
Signal intensity influences on the atomic Faraday filter.
Luo, Bin; Yin, Longfei; Xiong, Junyu; Chen, Jingbiao; Guo, Hong
2018-06-01
Previous studies of the Faraday anomalous dispersion optical filter (FADOF) mainly focus on filtering weak signal light, without regard for the influence of the signal light intensity on the filter itself. However, in some applications the signal light is strong enough to change the filter's performance. In this work, the influence of the signal light intensity on the transmittance spectrum is experimentally investigated in a 780 nm Rb85 FADOF in both the line-center and wings operation modes. The results show that the transmittance spectrum varies significantly with the signal light intensity. As the signal light increases, some existing transmittance peaks decline, some new transmittance peaks appear, and the maximum transmittance peak frequency may change. The spectrum under strong signal light can be quite different from that calculated under the weak-signal assumption. These results are important for applications of the FADOF in the condition of strong signal light.
Antoniou, Michael N.; Robinson, Claire J.
2017-01-01
Cornell Alliance for Science has launched an initiative in which “citizen scientists” are called upon to evaluate studies on health risks of genetically modified (GM) crops and foods. The purpose is to establish whether the consensus on GM food safety claimed by the American Association for the Advancement of Science (AAAS) is supported by a review of the scientific literature. The Alliance’s citizen scientists are examining more than 12,000 publication abstracts to quantify how far the scientific literature supports the AAAS’s statement. We identify a number of fundamental weaknesses in the Alliance’s study design: evaluation is based only on information provided in the publication abstract; there is a lack of clarity as to what material is included in the 12,000 study abstracts to be reviewed, since the number of appropriately designed investigations addressing GM food safety is small; there is uncertainty as to whether studies of toxic effects arising from GM crop-associated pesticides will be included; there is a lack of clarity regarding whether divergent yet equally valid interpretations of the same study will be taken into account; and there is no definition of the cutoff point for consensus or non-consensus on GM food safety. In addition, vital industry proprietary biosafety data on GM crops and associated pesticides are not publicly available and thus cannot inform this project. Based on these weaknesses in the study design, we believe it is questionable whether any objective or meaningful conclusion can be drawn from the Alliance’s initiative. PMID:28447029
Recovery of Spectrally Overlapping QPSK Signals Using a Nonlinear Optoelectronic Filter
2017-03-19
Loh, William; Yegnanarayanan, Siva; Kolodziej, Kenneth E.; Paul...
...recovery of a weak QPSK signal buried 35 dB beneath an interfering QPSK signal having an overlapping spectrum. This nonlinear optoelectronic filter ...from increased detection sensitivity. Here, we demonstrate an optoelectronic filter that enables the detection of a desired signal hidden beneath a
Structural implications of weak Ca2+ block in Drosophila cyclic nucleotide–gated channels
Lam, Yee Ling; Zeng, Weizhong; Derebe, Mehabaw Getahun; Jiang, Youxing
2015-01-01
Calcium permeability and the concomitant calcium block of monovalent ion current (“Ca2+ block”) are properties of cyclic nucleotide–gated (CNG) channels fundamental to visual and olfactory signal transduction. Although most CNG channels bear a conserved glutamate residue crucial for Ca2+ block, the degree of block displayed by different CNG channels varies greatly. For instance, the Drosophila melanogaster CNG channel shows only weak Ca2+ block despite the presence of this glutamate. We previously constructed a series of chimeric channels in which we replaced the selectivity filter of the bacterial nonselective cation channel NaK with a set of CNG channel filter sequences and determined that the resulting NaK2CNG chimeras displayed the ion selectivity and Ca2+ block properties of the parent CNG channels. Here, we used the same strategy to determine the structural basis of the weak Ca2+ block observed in the Drosophila CNG channel. The selectivity filter of the Drosophila CNG channel is similar to that of most other CNG channels except that it has a threonine at residue 318 instead of a proline. We constructed a NaK chimera, which we called NaK2CNG-Dm, which contained the Drosophila selectivity filter sequence. The high resolution structure of NaK2CNG-Dm revealed a filter structure different from those of NaK and all other previously investigated NaK2CNG chimeric channels. Consistent with this structural difference, functional studies of the NaK2CNG-Dm chimeric channel demonstrated a loss of Ca2+ block compared with other NaK2CNG chimeras. Moreover, mutating the corresponding threonine (T318) to proline in Drosophila CNG channels increased Ca2+ block by 16 times. These results imply that a simple replacement of a threonine for a proline in Drosophila CNG channels has likely given rise to a distinct selectivity filter conformation that results in weak Ca2+ block. PMID:26283200
Consensus formation times in anisotropic societies
NASA Astrophysics Data System (ADS)
Neirotti, Juan
2017-06-01
We developed a statistical mechanics model to study the emergence of a consensus in societies of adapting, interacting agents constrained by a social rule B. In the mean-field approximation, we find that if the agents' interaction H_0 is weak, all agents adapt to the social rule B, with which they form a consensus; however, if the interaction is sufficiently strong, a consensus is built against the established status quo. We observed that, after a transient time α_t, agents asymptotically approach complete consensus by following a path whereby they neglect their neighbors' opinions on socially neutral issues (i.e., issues for which the society as a whole has no opinion). α_t is found to be finite for most values of the interagent interaction H_0 and temperature T, with the exception of the values H_0 = 1, T → ∞, and the region determined by the inequalities β < 2 and 2βH_0 < 1 + β − √(1 + 2β − β²), for which consensus, with respect to B, is never reached.
Development of an autonomous video rendezvous and docking system, phase 3
NASA Technical Reports Server (NTRS)
Tietz, J. C.
1984-01-01
Field-of-view limitations proved troublesome. Higher resolution was required. Side thrusters were too weak. The strategy logic was improved and the Kalman filter was augmented to estimate target attitude and tumble rate. Two separate filters were used. The new filter estimates target attitude and angular momentum. The Newton-Raphson iteration improves image interpretation.
Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma
2003-02-28
The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task that measured the width of auditory filters in individuals with autism, and found that the auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects.
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
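The consensus step reduces to an intersection of the two candidate sets. A minimal sketch follows (hypothetical detail: the paper applies the logical "and" to regions of suspicion, approximated here by matching candidate centroids within a pixel tolerance):

```python
import numpy as np

def consensus_marks(marks_afum, marks_bilateral, tol=10.0):
    # Keep an AFUM candidate only if the bi-lateral subtraction algorithm
    # independently flagged a site within `tol` pixels (logical "and").
    marks_bilateral = np.asarray(marks_bilateral, dtype=float)
    kept = [p for p in np.asarray(marks_afum, dtype=float)
            if len(marks_bilateral)
            and np.min(np.linalg.norm(marks_bilateral - p, axis=1)) <= tol]
    return np.array(kept)
```

Because the two stage-1 detectors exploit different image evidence, their false positives are largely uncorrelated, so the intersection preserves true masses while discarding many spurious marks.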
New quality measure for SNP array based CNV detection.
Macé, A; Tuke, M A; Beckmann, J S; Lin, L; Jacquemont, S; Weedon, M N; Reymond, A; Kutalik, Z
2016-11-01
Only a few large systematic studies have evaluated the impact of copy number variants (CNVs) on common diseases. Several million individuals have been genotyped on single nucleotide variation arrays, which could be used for genome-wide CNV association studies. However, CNV calls remain prone to false positives and only empirical filtering strategies exist in the literature. To overcome this issue, we defined a new quality score (QS) estimating the probability of a CNV called by PennCNV to be confirmed by other software. Out-of-sample comparison showed that the correlation between the consensus CNV status and the QS is twice as high as it is for any previously proposed CNV filters. ROC curves displayed an AUC higher than 0.8, and simulations showed an increase of up to 20% in statistical power when using the QS in comparison to other filtering strategies. Superior performance was confirmed also for an alternative consensus CNV definition and through improving known CNV-trait associations. Availability: http://goo.gl/T6yuFM. Contact: zoltan.kutalik@unil.ch or aurelien.mace@unil.ch. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Tugwell, Peter; Boers, Maarten; D'Agostino, Maria-Antonietta; Beaton, Dorcas; Boonen, Annelies; Bingham, Clifton O; Choy, Ernest; Conaghan, Philip G; Dougados, Maxime; Duarte, Catia; Furst, Daniel E; Guillemin, Francis; Gossec, Laure; Heiberg, Turid; van der Heijde, Désirée M; Hewlett, Sarah; Kirwan, John R; Kvien, Tore K; Landewé, Robert B; Mease, Philip J; Østergaard, Mikkel; Simon, Lee; Singh, Jasvinder A; Strand, Vibeke; Wells, George
2014-05-01
The Outcome Measures in Rheumatology (OMERACT) Filter provides guidelines for the development and validation of outcome measures for use in clinical research. The "Truth" section of the OMERACT Filter requires that criteria be met to demonstrate that the outcome instrument meets the criteria for content, face, and construct validity. Discussion groups critically reviewed a variety of ways in which case studies of current OMERACT Working Groups complied with the Truth component of the Filter and what issues remained to be resolved. The case studies showed that there is broad agreement on criteria for meeting the Truth criteria through demonstration of content, face, and construct validity; however, several issues were identified that the Filter Working Group will need to address. These issues will require resolution to reach consensus on how Truth will be assessed for the proposed Filter 2.0 framework, for instruments to be endorsed by OMERACT.
Recovery after critical illness: putting the puzzle together-a consensus of 29.
Azoulay, Elie; Vincent, Jean-Louis; Angus, Derek C; Arabi, Yaseen M; Brochard, Laurent; Brett, Stephen J; Citerio, Giuseppe; Cook, Deborah J; Curtis, Jared Randall; Dos Santos, Claudia C; Ely, E Wesley; Hall, Jesse; Halpern, Scott D; Hart, Nicholas; Hopkins, Ramona O; Iwashyna, Theodore J; Jaber, Samir; Latronico, Nicola; Mehta, Sangeeta; Needham, Dale M; Nelson, Judith; Puntillo, Kathleen; Quintel, Michael; Rowan, Kathy; Rubenfeld, Gordon; Van den Berghe, Greet; Van der Hoeven, Johannes; Wunsch, Hannah; Herridge, Margaret
2017-12-05
In this review, we seek to highlight how critical illness and critical care affect longer-term outcomes, to underline the contribution of ICU delirium to cognitive dysfunction several months after ICU discharge, to give new insights into ICU-acquired weakness, to emphasize the importance of value-based healthcare, and to delineate the elements of family-centered care. This consensus of 29 also provides a perspective and a research agenda about post-ICU recovery.
Essentially nonoscillatory postprocessing filtering methods
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1992-01-01
High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods, denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points which is determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results based on nonstandard central difference schemes, which exactly preserve entropy and have been recently shown generally not to be weakly convergent to a solution of the conservation law, are also obtained using our filters.
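A one-dimensional caricature of the approach (illustrative stencil width and polynomial degree; not the authors' exact construction): for each grid point, choose the smoothest containing stencil by comparing divided differences ENO-style, then replace the point value with a least-squares polynomial fit over that stencil.

```python
import numpy as np

def enols_filter(u, width=5, degree=2):
    # Assumes len(u) >= width. For each point, pick among candidate
    # stencils containing it the one with the smallest total second
    # difference (ENO-style smoothness selection), then evaluate a
    # least-squares polynomial fit of that stencil at the point.
    u = np.asarray(u, dtype=float)
    out = u.copy()
    for i in range(len(u)):
        best_s, best_rough = None, np.inf
        for s in range(max(0, i - width + 1), min(i, len(u) - width) + 1):
            rough = np.abs(np.diff(u[s:s + width], n=2)).sum()
            if rough < best_rough:
                best_rough, best_s = rough, s
        xs = np.arange(best_s, best_s + width)
        out[i] = np.polyval(np.polyfit(xs, u[best_s:best_s + width], degree), i)
    return out
```

Because the stencil is chosen where the data are smoothest, the fit damps spurious oscillations near a shock without smearing the discontinuity the way a fixed centered filter would.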
Varying influence of environmental gradients on vegetation patterns across biomes
NASA Astrophysics Data System (ADS)
Dahlin, K.; Asner, G. P.; Mascaro, J.; Taylor, P.
2016-12-01
Environmental gradients, like elevation, slope, aspect, and soil properties, filter vegetation types at the local scale. These 'environmental filters' create conditions that are conducive to the success or failure of different plant types, influencing landscape-scale heterogeneity in taxonomic diversity, functional diversity, biomass accumulation, greenness, and more. Niche-based models implicitly assume that environmental filtering is the dominant process controlling plant distributions. While environmental filtering is a well understood process, its importance relative to other drivers of heterogeneity, like disturbance, human impacts, and plant-animal interactions, remains unknown and likely varies between biomes. Here we synthesize results from several studies using data from the Carnegie Airborne Observatory - a fused LiDAR and imaging spectroscopy system - that mapped vegetation patterns in multiple biomes and associated them with environmental gradients. The study sites range from Panama to California, and the patterns range from aboveground carbon to foliar chemistry. We show that at fine spatial scales environmental filtering is a strong predictor of aboveground biomass in a dry system (Jasper Ridge Biological Preserve, California - Dahlin et al 2012) but a weak predictor of plant functional traits in that same system (Dahlin et al 2014), a weak predictor of aboveground carbon in the tropics (Barro Colorado Island, Panama - Mascaro et al 2011; Osa Peninsula, Costa Rica - Taylor et al 2015), and a weak predictor of greenness (NDVI) in a disturbed dry system (Santa Cruz Island, California - Dahlin et al 2014). Collectively, these results suggest that while environmental filtering is an important driver of landscape-scale heterogeneity, it is not the only, or often even the most important, driver for many of these systems and patterns.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from the foreground and background regions. It drives the curve to evolve according to the criterion of minimum variation of the foreground and background regions. Experiments have proved that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, tests on noisy medical images filtered by an edge-preserving curvature flow filter show a significant effect.
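The variable force can be sketched as follows (a minimal Chan-Vese-style caricature of the two-region criterion; the actual model's functional may differ in detail): the force at each pixel compares its deviation from the current foreground and background mean intensities, replacing the balloon model's constant inflation pressure.

```python
import numpy as np

def region_force(image, inside_mask):
    # Two-region variable force: positive where a pixel resembles the
    # foreground mean c1 more than the background mean c2, so the contour
    # expands into foreground and retreats from background, minimizing
    # the intensity variation of the two regions.
    c1 = image[inside_mask].mean()     # mean intensity inside the contour
    c2 = image[~inside_mask].mean()    # mean intensity outside
    return (image - c2) ** 2 - (image - c1) ** 2
```

Because this force vanishes where neither region is clearly preferred, the evolution stalls at weak edges where a gradient-based stopping term would leak.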
Dynamic spin filtering at the Co/Alq3 interface mediated by weakly coupled second layer molecules.
Droghetti, Andrea; Thielen, Philip; Rungger, Ivan; Haag, Norman; Großmann, Nicolas; Stöckl, Johannes; Stadtmüller, Benjamin; Aeschlimann, Martin; Sanvito, Stefano; Cinchetti, Mirko
2016-08-31
Spin filtering at organic-metal interfaces is often determined by the details of the interaction between the organic molecules and the inorganic magnets used as electrodes. Here we demonstrate a spin-filtering mechanism based on the dynamical spin relaxation of the long-living interface states formed by the magnet and weakly physisorbed molecules. We investigate the case of Alq3 on Co and, by combining two-photon photoemission experiments with electronic structure theory, show that the observed long-time spin-dependent electron dynamics is driven by molecules in the second organic layer. The interface states formed by physisorbed molecules are not spin-split, but acquire a spin-dependent lifetime, which is the result of dynamical spin relaxation driven by the interaction with the Co substrate. Such a spin-filtering mechanism plays an important role in the injection of spin-polarized carriers across the interface and their subsequent hopping diffusion into successive molecular layers of molecular spintronics devices. PMID:27578395
Developing core outcome measurement sets for clinical trials: OMERACT filter 2.0.
Boers, Maarten; Kirwan, John R; Wells, George; Beaton, Dorcas; Gossec, Laure; d'Agostino, Maria-Antonietta; Conaghan, Philip G; Bingham, Clifton O; Brooks, Peter; Landewé, Robert; March, Lyn; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Tugwell, Peter
2014-07-01
Lack of standardization of outcome measures limits the usefulness of clinical trial evidence to inform health care decisions. This can be addressed by agreeing on a minimum core set of outcome measures per health condition, containing measures relevant to patients and decision makers. Since 1992, the Outcome Measures in Rheumatology (OMERACT) consensus initiative has successfully developed core sets for many rheumatologic conditions, actively involving patients since 2002. Its expanding scope required an explicit formulation of its underlying conceptual framework and process. Methods comprised literature searches and an iterative consensus process (surveys and group meetings) of stakeholders including patients, health professionals, and methodologists within and outside rheumatology. To comprehensively sample patient-centered and intervention-specific outcomes, a framework emerged that comprises three core "Areas," namely Death, Life Impact, and Pathophysiological Manifestations, and one strongly recommended Area, Resource Use. Through literature review and consensus process, core set development for any specific health condition starts by identifying at least one core "Domain" within each of the Areas to formulate the "Core Domain Set." Next, at least one applicable measurement instrument for each core Domain is identified to formulate a "Core Outcome Measurement Set." Each instrument must prove to be truthful (valid), discriminative, and feasible. In 2012, 96% of the voting participants (n=125) at the OMERACT 11 consensus conference endorsed this model and process. The OMERACT Filter 2.0 explicitly describes a comprehensive conceptual framework and a recommended process to develop core outcome measurement sets for rheumatology, likely to be useful as a template in other areas of health care. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Weilin; Wang, Runqiu; Chen, Yangkang
2018-05-01
Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect weak signals in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. To detect the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from its morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inverse problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The algorithm framework, parameter selection, and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The results demonstrate that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
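Since the STA/LTA picker serves as the baseline here, a minimal Python sketch of it may be useful (the window lengths, trigger level, and synthetic trace are illustrative assumptions, not the paper's settings):

import numpy as np

def sta_lta(trace, n_sta, n_lta):
    # Classic STA/LTA characteristic function on the squared trace.
    energy = trace ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta  # short-window average
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta  # long-window average
    m = min(len(sta), len(lta))                   # align both at the trace end
    return sta[-m:] / (lta[-m:] + 1e-12)

# Synthetic trace: noise plus a weak arrival at sample 600.
rng = np.random.default_rng(1)
tr = rng.normal(0, 1.0, 1000)
tr[600:650] += 4.0 * np.sin(np.linspace(0, 12 * np.pi, 50))
ratio = sta_lta(tr, n_sta=20, n_lta=200)
print("trigger index within the aligned window:", int(np.argmax(ratio > 3.0)))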
Distributed Estimation using Bayesian Consensus Filtering
2014-06-06
When a domain isn’t a domain, and why it’s important to properly filter proteins in databases
Towse, Clare-Louise; Daggett, Valerie
2013-01-01
Membership in a protein domain database does not a domain make; a feature we realized when generating a consensus view of protein fold space with our Consensus Domain Dictionary (CDD). This dictionary was used to select representative structures for characterization of the protein dynameome: the Dynameomics initiative. Through this endeavor we rejected a surprising 40% of the 1695 folds in the CDD as being non-autonomous folding units. Although some of this was due to the challenges of grouping similar fold topologies, the dissonance between the cataloguing and structural qualification of protein domains remains surprising. Another potential factor is previously overlooked intrinsic disorder; predicted estimates suggest 40% of proteins have either local or global disorder. One thing is clear: filtering a structural database and ensuring a consistent definition for protein domains are crucial, and caution is prescribed when generalizations of globular domains are drawn from unfiltered protein domain datasets. PMID:23108912
Climate tolerances and trait choices shape continental patterns of urban tree biodiversity
G. Darrel Jenerette; Lorraine W. Clarke; Meghan L. Avolio; Diane E. Pataki; Thomas W. Gillespie; Stephanie Pincetl; Dave J. Nowak; Lucy R. Hutyra; Melissa McHale; Joseph P. McFadden; Michael Alonzo
2016-01-01
Aim. We propose and test a climate tolerance and trait choice hypothesis of urban macroecological variation in which strong filtering associated with low winter temperatures restricts urban biodiversity while weak filtering associated with warmer temperatures and irrigation allows dispersal of species from a global source pool, thereby...
Attenuation of harmonic noise in vibroseis data using Simulated Annealing
NASA Astrophysics Data System (ADS)
Sharma, S. P.; Tildy, Peter; Iranpour, Kambiz; Scholtz, Peter
2009-04-01
Processing of high-productivity vibroseis seismic data (such as slip-sweep acquisition records) suffers from the well-known disadvantage of harmonic distortion. Harmonic distortions are observed after cross-correlation of the recorded seismic signal with the pilot sweep and affect the signals in negative time (before the actual strong reflection event). Weak reflection events of the earlier sweeps falling in the negative-time window of the cross-correlation sequence are masked by harmonic distortions. Although the amplitude of the harmonic distortion is small (10-20%) compared to the fundamental amplitude of the reflection events, it is significant enough to mask weak reflected signals. Eliminating harmonic noise due to source signal distortion from the cross-correlated seismic trace has been a challenge since vibratory sources came into use, and methods still need improvement. An approach has been worked out that minimizes the level of harmonic distortion by designing a signal similar to the harmonic distortion. An arbitrary-length filter is optimized using the Simulated Annealing (SA) global optimization approach to design a harmonic signal. The approach convolves a ratio trace (the ratio of the harmonics to the fundamental sweep) with the correlated "positive time" recorded signal and an arbitrary filter. A synthetic data study has revealed that this procedure of designing a signal similar to the desired harmonics, using the convolution of a suitable filter with the theoretical ratio of harmonics to fundamental sweep, helps in reducing the harmonic distortion problem. Once a similar signal is generated for a vibroseis source using an optimized filter, this filter can be used to generate harmonics, which can be subtracted from the main cross-correlated trace to obtain a better, undistorted image of the subsurface. Designing the predicted harmonics to reduce the energy in the trace by considering weak reflections and observed harmonics together yields the desired result (resolution of weak reflected signals from the harmonic distortion). As the optimization proceeds, difference plots of the desired and predicted harmonics show how weak reflections gradually emerge from the harmonic distortion during later iterations of the global optimization. The procedure is applied to resolving weak reflections from a number of traces considered together. A more precise design of the harmonics requires longer computation times, which is impractical for voluminous seismic data. However, the objective of resolving weak reflection signals in strong harmonic noise can be achieved quickly using a faster cooling schedule and fewer iterations and moves in the simulated annealing procedure. This process could help in reducing the harmonic distortion and achieving the objective of recovering the lost weak reflection events in the cross-correlated seismic traces. Acknowledgements: The research was supported under the European Marie Curie Host Fellowships for Transfer of Knowledge (TOK) Development Host Scheme (contract no. MTKD-CT-2006-042537).
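For readers unfamiliar with simulated annealing as a filter-design tool, the accept/reject loop itself is compact; the following self-contained Python sketch recovers a short hidden filter by minimizing residual energy (the toy traces, filter length, move size, and cooling rate are illustrative assumptions, and the authors' objective, which balances weak reflections against observed harmonics, is more elaborate):

import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for the correlated trace and the harmonic "ratio trace".
ratio = rng.normal(0, 1.0, 500)
trace = rng.normal(0, 0.1, 500) + np.convolve(ratio, [0.5, -0.2, 0.1], mode="same")

def residual_energy(f):
    # Energy left after subtracting the predicted harmonic signal.
    return np.sum((trace - np.convolve(ratio, f, mode="same")) ** 2)

f = np.zeros(3)
e = residual_energy(f)
temp = 1.0
for _ in range(5000):
    cand = f + rng.normal(0, 0.05, f.shape)            # random move
    e_cand = residual_energy(cand)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if e_cand < e or rng.random() < np.exp((e - e_cand) / temp):
        f, e = cand, e_cand
    temp *= 0.999                                      # fast cooling schedule

print("estimated filter:", np.round(f, 2))             # close to [0.5, -0.2, 0.1]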
Global tracking of space debris via CPHD and consensus
NASA Astrophysics Data System (ADS)
Wei, Baishen; Nener, Brett; Liu, Weifeng; Ma, Liang
2017-05-01
Space debris tracking is of great importance for the safe operation of spacecraft. This paper presents an algorithm that achieves global tracking of space debris with a multi-sensor network. The sensor network has an unknown and possibly time-varying topology. A consensus algorithm is used to effectively counteract the effects of data incest. Gaussian Mixture-Cardinalized Probability Hypothesis Density (GM-CPHD) filtering is used to estimate the state of the space debris. As an example of the method, 45 clusters of sensors are used to achieve global tracking. The performance of the proposed approach is demonstrated by simulation experiments.
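The consensus step that counteracts data incest can be illustrated with a plain average-consensus iteration (a minimal sketch on a toy ring network; the function name, step size, and topology are assumptions for illustration, not the paper's GM-CPHD fusion rule):

import numpy as np

def consensus_step(estimates, adjacency, eps=0.1):
    # One average-consensus iteration: each node moves toward its neighbors.
    # estimates: (n_sensors, dim) local estimates; adjacency: 0/1 link matrix.
    disagreement = adjacency @ estimates - adjacency.sum(1, keepdims=True) * estimates
    return estimates + eps * disagreement

# Four sensors on a ring, each starting from a different local estimate.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
x = np.array([[1.0], [3.0], [5.0], [7.0]])
for _ in range(100):
    x = consensus_step(x, A)
print(x.ravel())  # all nodes converge to the network average, 4.0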
Consensus statement: immunonutrition and exercise
USDA-ARS?s Scientific Manuscript database
In this section we evaluate the strengths and weaknesses of various biomarkers used in studies by nutritional immunologists (Table 1). An important consideration is that exercise immunologists often perform investigative work in the field, away from the rigorously controlled laboratory environment; ...
Racial discrimination: how not to do it.
Hochman, Adam
2013-09-01
The UNESCO Statements on Race of the early 1950s are understood to have marked a consensus amongst natural scientists and social scientists that 'race' is a social construct. Human biological diversity was shown to be predominantly clinal, or gradual, not discrete and clustered, as racial naturalism implied. From the seventies, social constructionists added that the vast majority of human genetic diversity resides within any given racialised group. While social constructionism about race became the majority consensus view on the topic, social constructionism has always had its critics. Sesardic (2010) has compiled these criticisms into one of the strongest defences of racial naturalism in recent times. In this paper I argue that Sesardic equivocates between two versions of racial naturalism: a weak version and a strong version. As I shall argue, the strong version is not supported by the relevant science. The weak version, on the other hand, does not contrast properly with what social constructionists think about 'race'. By leaning on this weak view Sesardic's racial naturalism intermittently gains an appearance of plausibility, but this view is too weak to revive racial naturalism. As Sesardic demonstrates, there are new arguments for racial naturalism post-Human Genome Diversity Project. The positive message behind my critique is how to be a social constructionist about race in the post-genomic era. Copyright © 2013 Elsevier Ltd. All rights reserved.
Laser Velocimeter for Studies of Microgravity Combustion Flowfields
NASA Technical Reports Server (NTRS)
Varghese, P. L.; Jagodzinski, J.
2001-01-01
We are currently developing a velocimeter based on modulated filtered Rayleigh scattering (MFRS), utilizing diode lasers to make measurements in an unseeded gas or flame. MFRS is a novel variation of filtered Rayleigh scattering that uses modulation absorption spectroscopy to detect the strong absorption of a weak Rayleigh-scattered signal. A rubidium (Rb) vapor filter provides the relatively strong absorption, and semiconductor diode lasers generate the relatively weak Rayleigh-scattered signal. Alkali metal vapors have a high optical depth at modest vapor pressures, and their narrow linewidth is ideally suited for high-resolution velocimetry; the compact, rugged construction of diode lasers makes them ideally suited for microgravity experimentation. Molecular Rayleigh scattering of laser light simplifies flow measurements as it obviates the complications of flow-seeding. The MFRS velocimeter should offer an attractive alternative to comparable systems, providing a relatively inexpensive means of measuring velocity in unseeded flows and flames.
Gariepy, Cheryl E; Heyman, Melvin B; Lowe, Mark E; Pohl, John F; Werlin, Steven L; Wilschanski, Michael; Barth, Bradley; Fishman, Douglas S; Freedman, Steven D; Giefer, Matthew J; Gonska, Tanja; Himes, Ryan; Husain, Sohail Z; Morinville, Veronique D; Ooi, Chee Y; Schwarzenberg, Sarah J; Troendle, David M; Yen, Elizabeth; Uc, Aliye
2017-01-01
Acute recurrent pancreatitis (ARP) and chronic pancreatitis (CP) have been diagnosed in children at increasing rates during the past decade. As pediatric ARP and CP are still relatively rare conditions, little quality evidence is available on which to base the diagnosis and determination of etiology. The aim of the study was to review the current state of the literature regarding the etiology of these disorders and to develop a consensus among a panel of clinically active specialists caring for children with these disorders, to help guide the diagnostic evaluation and identify areas most in need of future research. A systematic review of the literature was performed and scored for quality, followed by consensus statements developed and scored by each individual in the group for level of agreement and strength of the supporting data using a modified Delphi method. Scores were analyzed for the level of consensus achieved by the group. The panel reached consensus on 27 statements covering the definitions of pediatric ARP and CP, evaluation for potential etiologies of these disorders, and long-term monitoring. Statements for which the group reached consensus to make no recommendation or could not reach consensus are discussed. This consensus helps define the minimal diagnostic evaluation and monitoring of children with ARP and CP. Even in areas in which we reached consensus, the quality of the evidence is weak, highlighting the need for further research. Improved understanding of the underlying cause will facilitate treatment development and targeting. PMID:27782962
The identification of criteria to evaluate prehospital trauma care using the Delphi technique.
Rosengart, Matthew R; Nathens, Avery B; Schiff, Melissa A
2007-03-01
Current trauma system performance improvement emphasizes hospital- and patient-based outcome measures such as mortality and morbidity, with little focus upon the processes of prehospital trauma care. Little data exist to suggest which prehospital criteria should serve as potential filters. This study identifies the most important filters for auditing prehospital trauma care, using a Delphi technique to achieve consensus of expert opinion. Experts in trauma care from the United States (n = 81) were asked to generate filters of potential utility in monitoring the prehospital aspect of the trauma system, and were then required to rank these filters in order of importance to identify those of greatest importance. Twenty-eight filters ranking in the highest tertile are proposed. The majority (54%) pertains to aspects of emergency medical services, which comprise 7 of the top 10 (70%) filters. Triage filters follow in priority ranking, comprising 29% of the final list. Filters concerning interfacility transfers and transportation ranked lowest. This study identifies audit filters representing the most important aspects of prehospital trauma care that merit continued evaluation and monitoring. A subsequent trial addressing the utility of these filters could potentially enhance the sensitivity of identifying deviations in prehospital care, standardize the performance improvement process, and translate into an improvement in patient care and outcome.
High flow ceramic pot filters.
van Halem, D; van der Laan, H; Soppe, A I A; Heijman, S G J
2017-11-01
Ceramic pot filters are considered safe, robust and appropriate technologies, but there is a general consensus that water revenues are limited due to clogging of the ceramic element. The objective of this study was to investigate the potential of high flow ceramic pot filters to produce more water without sacrificing their microbial removal efficacy. High flow pot filters, produced by increasing the rice husk content, had a higher initial flow rate (6-19 L/h), but the initial LRVs for E. coli of high flow filters were slightly lower than for regular ceramic pot filters. This disadvantage was, however, only temporary, as the clogging in high flow filters had a positive effect on the LRV for E. coli (from below 1 to 2-3 after clogging). Therefore, it can be cautiously concluded that regular ceramic pot filters perform better initially, but after clogging, the high flow filters have a higher flow rate as well as a higher LRV for E. coli. To improve the initial performance of new high flow filters, it is recommended to further utilize the residence time of the water in the receptacle, since additional E. coli inactivation was observed during overnight storage. Although a relationship was observed between flow rate and LRV of MS2 bacteriophages, both regular and high flow filters were unable to reach over 2 LRV. Copyright © 2017 Elsevier Ltd. All rights reserved.
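For reference, the log reduction value (LRV) used throughout is simply the base-10 logarithm of the influent-to-effluent concentration ratio; a short Python sketch (the CFU numbers are made up for illustration):

import math

def log_removal_value(influent_cfu, effluent_cfu):
    # LRV = log10(influent / effluent); LRV 2 corresponds to 99% removal.
    return math.log10(influent_cfu / effluent_cfu)

# A filter reducing E. coli from 1e6 to 1e3 CFU per 100 mL achieves LRV 3.
print(log_removal_value(1e6, 1e3))  # 3.0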
The design of preamplifier and ADC circuit based on weak e-optical signal
NASA Astrophysics Data System (ADS)
Fen, Leng; Ying-ping, Yang; Ya-nan, Yu; Xiao-ying, Xu
2011-02-01
To meet the requirements for processing weak e-optical signals in a QPD detection system, this article introduces the circuit principles for designing a preamplifier and ADC circuit with I/V conversion, an instrumentation amplifier, a low-pass filter and 16-bit A/D conversion. The article also discusses the circuit's noise suppression and isolation according to the characteristics of weak signals, and gives a method of software correction. Finally, the weak signal was tested with a Keithley 2000, with good results.
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Shu-qing; Feng, Zhong-ying; Liu, Xiao-fei; Gao, Jin-yue
2016-12-01
To detect weak signal light against high background noise, we present a theoretical study of an ultra-narrow-bandwidth tunable atomic filter based on electromagnetically induced transparency (EIT). In a three-level Λ-type atomic system on the rubidium D1 line, the bandwidth of the EIT atomic filter is narrowed to ~6.5 MHz, and the single-peak transmission of the filter can reach 86%. Moreover, the transmission wavelength can be tuned by changing the coupling light frequency. This theoretical scheme can also be applied to other alkali atomic systems.
Tracks detection from high-orbit space objects
NASA Astrophysics Data System (ADS)
Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.
2017-05-01
The paper presents the results of studies of a complex algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which can be discontinuous, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and detection decision making. Weak tracks of space objects were simulated on real frames of the night sky obtained in the stationary-telescope regime. It is shown that the penetrating power of the optoelectronic device is increased by almost 2 magnitudes.
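The shear-then-median step is the heart of the algorithm and is easy to sketch (a minimal Python illustration with a hypothetical shift_and_median function, an assumed known per-frame motion, and a toy noise model, not the paper's full pipeline):

import numpy as np

def shift_and_median(frames, vx, vy):
    # Align the frame series under an assumed object motion (the shear
    # transformation), then median-combine. A feature moving with
    # (vx, vy) pixels/frame lines up after the shift and survives the
    # pixelwise median, while uncorrelated noise and transient hits
    # (e.g., fast particles) are suppressed.
    aligned = [np.roll(np.roll(f, -int(round(vy * k)), axis=0),
                       -int(round(vx * k)), axis=1)
               for k, f in enumerate(frames)]
    return np.median(aligned, axis=0)

# Toy series: a faint object drifting 2 px/frame in x through heavy noise.
rng = np.random.default_rng(3)
frames = []
for k in range(9):
    f = rng.normal(0, 1.0, (32, 32))
    f[16, 5 + 2 * k] += 2.5            # faint moving object
    frames.append(f)
stacked = shift_and_median(frames, vx=2, vy=0)
print(float(stacked[16, 5]))           # enhanced object pixel vs background ~0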
Werner, Ricardo N; Jacobs, Anja; Rosumeck, Stefanie; Nast, Alexander
2014-12-01
Guideline development requires considerable time and financial resources. New technical devices such as software for online conferences may help to reduce the time and cost of guideline development. The present survey may serve as an explorative pilot for a future study to determine the technical feasibility, acceptability and possible weaknesses of online consensus conferences for clinical guideline development. An anonymous online survey was conducted among participants in the online consensus conference of the International League of Dermatological Societies (ILDS) Guidelines for the Treatment of Actinic Keratosis. The majority of participants reported no technical problems with participation in the online consensus conference; one participant had substantial technical problems attributable to a regional telephone breakdown. The majority of participants would not have preferred a traditional face-to-face conference, and all participants rated online consensus conferences for international guidelines as absolutely acceptable. Rates of acceptance were particularly high among those participants with prior experience with consensus conferences. Certain aspects, particularly the possibilities for debate, were rated as possibly superior in face-to-face conferences by some participants. The data from the online survey indicate that online consensus conferences may be an appropriate alternative to traditional face-to-face consensus conferences, especially within the frame of international guidelines that would otherwise require high travel costs and time. Further research is necessary to confirm the data from this explorative pilot study. © 2014 John Wiley & Sons, Ltd.
Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg
2014-01-01
Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophiles' genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO) data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO) terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 Enzyme Commission (E.C.) numbers was translated into 171 GO terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website. PMID:24778629
Method and apparatus for evaluating structural weakness in polymer matrix composites
Wachter, E.A.; Fisher, W.G.
1996-01-09
A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image. 6 figs.
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of the onset time of an earthquake is imperative for correctly determining the earthquake's location and other parameters used to build seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term average over long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms them.
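The two-stage logic (smooth with a Laplacian-of-Gaussian mask, then apply a dual-threshold comparator) can be sketched as follows; note this uses SciPy's standard LoG filter as a stand-in for the authors' modified mask, and the thresholds and synthetic trace are illustrative assumptions:

import numpy as np
from scipy.ndimage import gaussian_laplace

def detect_onset(trace, sigma=5.0, high=3.0, low=1.5):
    # LoG filtering suppresses background noise while keeping the sharp
    # energy change at the arrival; the dual-threshold comparator arms at
    # `high`, then walks back to where the response first exceeded `low`,
    # approximating the true onset.
    response = np.abs(gaussian_laplace(trace.astype(float), sigma))
    z = (response - response.mean()) / response.std()
    onset = int(np.argmax(z > high))            # first strong excursion
    while onset > 0 and z[onset - 1] > low:     # walk back to the weak threshold
        onset -= 1
    return onset

rng = np.random.default_rng(4)
tr = rng.normal(0, 1.0, 2000)
tr[1200:] += 5.0 * np.sin(np.linspace(0, 80 * np.pi, 800))  # event starts at 1200
print("picked onset near sample:", detect_onset(tr))        # ~1200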
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and the proposed weighting technique enhances the inversion results.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Jin, Chao; Lee, Jay
2017-01-01
The Minimum Entropy Deconvolution (MED) filter, a non-parametric approach for impulsive signature detection, has been widely studied recently. Although the merits of the MED filter are manifold, the method tends to over-highlight the dominant peaks, and its performance becomes less stable when strong noise exists. In order to better understand the behavior of the MED filter, this study first investigates the mathematical fundamentals of the MED filter and then explains why it tends to over-highlight the dominant peaks. To pursue finer solutions for weak impulsive signature enhancement, the Convolutional Sparse Filter (CSF) is proposed in this work and its derivation is presented in detail. The superiority of the proposed CSF over the MED filter is validated on both simulated and experimental data. The results demonstrate that the CSF is an effective method for impulsive signature enhancement that could be applied in rotating machines for incipient fault detection.
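The classic MED iteration that the paper analyzes is short enough to sketch (a Wiggins-style implementation in Python; the filter length, iteration count, and toy signal are illustrative assumptions):

import numpy as np
from scipy.linalg import toeplitz, solve

def med_filter(x, flen=30, n_iter=30):
    # Wiggins-style MED: find an FIR filter f whose output y = f * x
    # maximizes impulsiveness (kurtosis). Each pass solves the normal
    # equations R f = b, where b is built from y**3 - this cubing is
    # exactly what makes MED favor the dominant peaks.
    N = len(x)
    r = np.correlate(x, x, mode="full")[N - 1:N - 1 + flen]
    R = toeplitz(r) + 1e-8 * r[0] * np.eye(flen)   # small ridge for stability
    f = np.zeros(flen)
    f[flen // 2] = 1.0                             # start from a delayed spike
    for _ in range(n_iter):
        y = np.convolve(x, f)[:N]                  # causal filter output
        b = np.correlate(y ** 3, x, mode="full")[N - 1:N - 1 + flen]
        f = solve(R, b)
        f /= np.linalg.norm(f)                     # fix the scale
    return f

# Toy: periodic impulses smeared by a wavelet, plus noise.
rng = np.random.default_rng(5)
imp = np.zeros(1024)
imp[::128] = 1.0
x = np.convolve(imp, np.hanning(25), mode="same") + rng.normal(0, 0.05, 1024)
y = np.convolve(x, med_filter(x))[:1024]

def kurtosis(v):
    return float(((v - v.mean()) ** 4).mean() / v.var() ** 2)

print("kurtosis in -> out:", kurtosis(x), kurtosis(y))  # output typically higher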
Reaction times to weak test lights. [psychophysics biological model
NASA Technical Reports Server (NTRS)
Wandell, B. A.; Ahumada, P.; Welsh, D.
1984-01-01
Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs there is some probability - increasing with the magnitude of the sampled response - that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper a test is conducted of the hypothesis that the reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.
Extracting spatial information from large aperture exposures of diffuse sources
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Moos, H. W.
1981-01-01
The spatial properties of large aperture exposures of diffuse emission can be used both to investigate spatial variations in the emission and to filter out camera noise in exposures of weak emission sources. Spatial imaging can be accomplished both parallel and perpendicular to dispersion with a resolution of 5-6 arc sec, and a narrow median filter running perpendicular to dispersion across a diffuse image selectively filters out point source features, such as reseaux marks and fast particle hits. Spatial information derived from observations of solar system objects is presented.
Enhancement of IVR images by combining an ICA shrinkage filter with a multi-scale filter
NASA Astrophysics Data System (ADS)
Chen, Yen-Wei; Matsuo, Kiyotaka; Han, Xianhua; Shimizu, Atsumoto; Shibata, Koichi; Mishina, Yukio; Mukuta, Yoshihiro
2007-11-01
Interventional Radiology (IVR) is an important technique to visualize and diagnose vascular disease. In real medical applications, a weak x-ray radiation source is used for imaging in order to reduce the radiation dose, resulting in a low-contrast, noisy image. It is important to develop a method to smooth out the noise while enhancing the vascular structure. In this paper, we propose to combine an ICA shrinkage filter with a multi-scale filter for the enhancement of IVR images. The ICA shrinkage filter is used for noise reduction and the multi-scale filter is used for enhancement of the vascular structure. Experimental results show that the quality of the image can be dramatically improved without any edge blurring by the proposed method. Simultaneous noise reduction and vessel enhancement have been achieved.
NASA Astrophysics Data System (ADS)
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2018-05-01
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
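Of the three methods, Kaiser-Squires is simple enough to sketch: on a flat sky it is a direct Fourier-space inversion of the shear field (a minimal Python illustration assuming periodic boundaries and no mask or noise treatment - precisely the limitations that motivate the Wiener filter and GLIMPSE; the round-trip test is illustrative, not the DES pipeline):

import numpy as np

def kaiser_squires(g1, g2):
    # Flat-sky KS inversion:
    # kappa_hat = [(k1^2 - k2^2) g1_hat + 2 k1 k2 g2_hat] / k^2.
    n1, n2 = g1.shape
    k1 = np.fft.fftfreq(n1)[:, None]
    k2 = np.fft.fftfreq(n2)[None, :]
    ksq = k1 ** 2 + k2 ** 2
    ksq[0, 0] = 1.0                        # avoid division by zero at k = 0
    g1h, g2h = np.fft.fft2(g1), np.fft.fft2(g2)
    kh = ((k1 ** 2 - k2 ** 2) * g1h + 2 * k1 * k2 * g2h) / ksq
    kh[0, 0] = 0.0                         # the mean convergence is unconstrained
    return np.real(np.fft.ifft2(kh))

# Round-trip test: forward-shear a random convergence field, then invert.
rng = np.random.default_rng(6)
kappa = rng.normal(0, 1, (64, 64))
k1 = np.fft.fftfreq(64)[:, None]
k2 = np.fft.fftfreq(64)[None, :]
ksq = k1 ** 2 + k2 ** 2
ksq[0, 0] = 1.0
g = np.fft.ifft2(((k1 ** 2 - k2 ** 2) + 2j * k1 * k2) / ksq * np.fft.fft2(kappa))
rec = kaiser_squires(np.real(g), np.imag(g))
print(np.corrcoef(kappa.ravel(), rec.ravel())[0, 1])  # ~1 (up to the mean mode)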
A miniature filter on a suspended substrate with a two-sided pattern of strip conductors
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Voloshin, A. S.; Bulavchuk, A. S.; Galeev, R. G.
2016-06-01
A miniature bandpass filter of new design with original stripline resonators on a suspended substrate has been studied. The proposed filters of third to sixth order are distinguished by their high frequency-selective properties and much smaller size in comparison to analogs. It is shown that a broad stopband, extending above three times the central passband frequency, is determined by the weak coupling of resonators at the resonances of the second and third modes. A prototype sixth-order filter with a central frequency of 1 GHz, manufactured on a ceramic substrate with dielectric permittivity ε = 80, has contour dimensions of 36.6 × 4.8 × 0.5 mm3. Parametric synthesis of the filter, based on electrodynamic 3D model simulations, showed quite good agreement with the results of measurements.
Impact of consensus statements and reimbursement on vena cava filter utilization.
Desai, Sapan S; Naddaf, Abdallah; Pan, James; Hood, Douglas; Hodgson, Kim J
2016-08-01
Pulmonary embolism is the third most common cause of death in hospitalized patients. Vena cava filters (VCFs) are indicated in patients with venous thromboembolism with a contraindication to anticoagulation. Prophylactic indications are still controversial. However, the utilization of VCFs during the past 15 years may have been affected by societal recommendations and reimbursement rates. The aim of this study was to evaluate the impact of societal guidelines and reimbursement on national trends in VCF placement from 1998 to 2012. The National Inpatient Sample was used to identify patients who underwent VCF placement between 1998 and 2012. Yearly VCF placement rates were evaluated. Societal guidelines and consensus statements were identified using a PubMed search. Reimbursement rates for VCF were determined on the basis of published Medicare reports. Statistical analysis was completed using descriptive statistics, the Fisher exact test, and trend analysis using the Mann-Kendall test, and was considered significant for P < .05. The use of VCFs increased 350% between January 1998 and January 2008. Consensus statements in favor of VCFs published by the Eastern Association for the Surgery of Trauma (July 2002) and the Society of Interventional Radiology (March 2006) were temporally associated with significant increases of 138% and 122% in the use of VCFs, respectively (P = .014 and P = .023). The American College of Chest Physicians guidelines (February 2008 and 2012) discouraging the use of VCFs were preceded by an initial stabilization in the use of VCFs between 2008 and 2012, followed by a 16% decrease in use starting in March 2012 (P = .38). Changes in Medicare reimbursement were not followed by a change in VCF implantation rates. There is a temporal association between the societal guidelines' recommendations regarding VCF placement and the actual rates of insertion. More uniform consensus statements from multiple societies along with the use of level I evidence may be required to lead to a definitive change in practice. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Data Quality Assessment Methods for the Eastern Range 915 MHz Wind Profiler Network
NASA Technical Reports Server (NTRS)
Lambert, Winifred C.; Taylor, Gregory E.
1998-01-01
The Eastern Range installed a network of five 915 MHz Doppler Radar Wind Profilers with Radio Acoustic Sounding Systems in the Cape Canaveral Air Station/Kennedy Space Center area to provide three-dimensional wind speed and direction and virtual temperature estimates in the boundary layer. The Applied Meteorology Unit, staffed by ENSCO, Inc., was tasked by the 45th Weather Squadron, the Spaceflight Meteorology Group, and the National Weather Service in Melbourne, Florida to investigate methods which will help forecasters assess profiler network data quality when developing forecasts and warnings for critical ground, launch and landing operations. Four routines were evaluated in this study: a consensus time period check, a precipitation contamination check, a median filter, and the Weber-Wuertz (WW) algorithm. No routine was able to effectively flag suspect data when used by itself. Therefore, the routines were used in different combinations. An evaluation of all possible combinations revealed two that provided the best results. The precipitation contamination and consensus time routines were used in both combinations. The median filter or WW was used as the final routine in the combinations to flag all other suspect data points.
Wei, Ning-Ning; Hamza, Adel
2014-01-27
We present an efficient and rational ligand/structure shape-based virtual screening approach combining our previous ligand shape-based similarity SABRE (shape-approach-based routines enhanced) and the 3D shape of the receptor binding site. Our approach exploits the pharmacological preferences of a number of known active ligands to take advantage of their structural diversities and chemical similarities, using a linear combination of weighted molecular shape density. Furthermore, the algorithm generates a consensus molecular-shape pattern recognition that is used to filter and place the candidate structure into the binding pocket. The descriptor pool used to construct the consensus molecular-shape pattern consists of four-dimensional (4D) fingerprints generated from the distribution of conformer states available to a molecule and the 3D shapes of a set of active ligands computed using the SABRE software. The virtual screening efficiency of SABRE was validated using the Directory of Useful Decoys (DUD) and the filtered version (WOMBAT) of 10 DUD targets. The ligand/structure shape-based similarity SABRE algorithm outperforms several other widely used virtual screening methods which use data fusion of multiple screening tools (2D and 3D fingerprints) and demonstrates a superior early retrieval rate of active compounds (EF(0.1%) = 69.0% and EF(1%) = 98.7%) from a large ligand database (~95,000 structures). Therefore, our similarity approach can be of particular use for identifying active compounds that are similar to reference molecules and predicting activity against other targets (chemogenomics). An academic license of the SABRE program is available on request.
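Conventions for reporting enrichment factors vary (here they are quoted as percentages of actives retrieved); in the classic ratio form, EF is computed as in this short sketch (the scores and labels are made up for illustration):

def enrichment_factor(scores, is_active, fraction):
    # EF(x) = (fraction of actives found in the top x of the ranking) / x.
    ranked = sorted(zip(scores, is_active), key=lambda t: -t[0])
    n_top = max(1, int(round(fraction * len(ranked))))
    found = sum(active for _, active in ranked[:n_top])
    return (found / sum(is_active)) / fraction

# 3 actives among 1000 compounds, all ranked near the top.
scores = [1.0 - i / 1000 for i in range(1000)]
actives = [1, 1, 0, 1] + [0] * 996
print(enrichment_factor(scores, actives, 0.01))  # 100.0: perfect early retrieval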
MobiDB-lite: fast and highly specific consensus prediction of intrinsic disorder in proteins.
Necci, Marco; Piovesan, Damiano; Dosztányi, Zsuzsanna; Tosatto, Silvio C E
2017-05-01
Intrinsic disorder (ID) is established as an important feature of protein sequences. Its use in proteome annotation is however hampered by the availability of many methods with similar performance at the single residue level, which have mostly not been optimized to predict long ID regions of size comparable to domains. Here, we have focused on providing a single consensus-based prediction, MobiDB-lite, optimized for highly specific (i.e. few false positive) predictions of long disorder. The method uses eight different predictors to derive a consensus which is then filtered for spurious short predictions. Consensus prediction is shown to outperform the single methods when annotating long ID regions. MobiDB-lite can be useful in large-scale annotation scenarios and has indeed already been integrated in the MobiDB, DisProt and InterPro databases. MobiDB-lite is available as part of the MobiDB database from URL: http://mobidb.bio.unipd.it/. An executable can be downloaded from URL: http://protein.bio.unipd.it/mobidblite/. silvio.tosatto@unipd.it. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
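The consensus-then-filter logic is straightforward to sketch (a minimal Python illustration assuming eight binary per-residue predictors, a 5-of-8 majority, and a 20-residue length cutoff; the real MobiDB-lite pipeline includes additional smoothing):

import numpy as np

def consensus_disorder(predictions, agreement=0.625, min_len=20):
    # Majority vote across per-residue binary predictions, then discard
    # regions shorter than min_len - the filter that makes the consensus
    # specific for long intrinsic disorder.
    votes = np.mean(predictions, axis=0) >= agreement
    out = np.zeros_like(votes)
    start = None
    for i, v in enumerate(np.append(votes, False)):  # sentinel flushes the last run
        if v and start is None:
            start = i
        elif not v and start is not None:
            if i - start >= min_len:                 # keep only long regions
                out[start:i] = True
            start = None
    return out

# Eight mock predictors over a 60-residue sequence with one long ID region.
rng = np.random.default_rng(7)
preds = np.zeros((8, 60))
preds[:, 25:50] = rng.random((8, 25)) < 0.95  # predictors mostly agree here
preds[:, 5:10] = 1.0                          # short spurious hit, filtered out
cons = consensus_disorder(preds)
print(np.where(cons)[0].min(), np.where(cons)[0].max())  # approximately 25 and 49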
ERIC Educational Resources Information Center
Hourigan, Mairead; Leavy, Aisling M.
2017-01-01
Although research suggests that many pre-service mathematics education programmes are weak interventions having a negligible effect on student teachers' knowledge, beliefs and attitudes, there is consensus that programmes that model and engage student teachers in reform teaching and learning approaches have the potential to effect positive change…
The Principal's Role in Setting School Climate (for School Improvement).
ERIC Educational Resources Information Center
Hall, Gene E.
Given that principals play a role in setting school climate, this paper focuses on how this actually happens. First, the paper explores different criteria and variables as possible frameworks for defining the term "climate." This task is complicated by problems in identifying consensus findings due to weak variable definitions and lack…
ERIC Educational Resources Information Center
Edwards, Oliver W.; Taub, Gordon E.
2016-01-01
Research indicates the primary difference between strong and weak readers is their phonemic awareness skills. However, there is no consensus regarding which specific components of phonemic awareness contribute most robustly to reading comprehension. In this study, the relationship among sound blending, sound segmentation, and reading comprehension…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-31
... initiative, and the idea, method or approach would be ineligible for assistance under a recent, current, or... strengths and weaknesses of the applications. ii. An overall consensus rating will be determined for each... unsolicited proposal and represents a unique or innovative idea, method, or approach which would not be...
A Conceptual Framework for Examining School Finance Reform Options for the State of Ohio.
ERIC Educational Resources Information Center
Monk, David H.; Theobald, Neil D.
2001-01-01
Interviews involving 58 Ohio stakeholders focused on detailing state K-12 education goals, identifying the current system's strengths and weaknesses, and discussing the financial/political viability of potential school funding strategies. Consensus emerged regarding the role of property taxes, local control, the foundation formula rationale, and…
Ultra-Wideband Harmonic Radar for Locating Radio-Frequency Electronics
2015-03-01
The relatively weak signal at f0 is increased by 40 dB by the Amplifier Research AR4W1000 power amplifier. (Fig. A-1: measured S-parameters for the MiniCircuits SLP-1000+ lowpass filter pair; Fig. A-2: measured S-parameters for the Amplifier Research amplifier.)
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates of the inverse of the filter input data covariance matrix, obtained by a recursive stochastic algorithm. The SIR performance as a function of sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
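The contrast between the two estimators is easy to reproduce (a minimal Python sketch; the AR(1)-style process, dimensions, and sample size are illustrative assumptions): averaging the diagonals of the sample covariance enforces the Toeplitz structure implied by weak stationarity, cutting the number of free parameters from p(p+1)/2 to p and thereby improving small-sample behavior.

import numpy as np
from scipy.linalg import toeplitz

def sample_covariance(X):
    # Unstructured estimate: needs many samples to be well conditioned.
    return X.T @ X / X.shape[0]

def toeplitz_covariance(X):
    # Structured estimate for weakly stationary data: average the diagonals
    # of the sample covariance so that R[i, j] depends only on |i - j|.
    S = sample_covariance(X)
    p = S.shape[0]
    r = np.array([np.mean(np.diag(S, k)) for k in range(p)])
    return toeplitz(r)

# Few samples (n = 12) in a high dimension (p = 8) from an AR(1)-like process.
rng = np.random.default_rng(8)
p, n, rho = 8, 12, 0.7
true = toeplitz(rho ** np.arange(p))
X = rng.normal(size=(n, p)) @ np.linalg.cholesky(true).T
print(np.linalg.norm(sample_covariance(X) - true),
      np.linalg.norm(toeplitz_covariance(X) - true))  # structured error typically smaller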
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. To solve this problem, a control-point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out that correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
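A minimal Python sketch of RANSAC in this role, fitting an affine correction model to control points and discarding outliers (the affine model, tolerance, and simulated star positions are illustrative assumptions, not the paper's sensor model):

import numpy as np

def fit_affine(src, dst):
    # Least-squares affine transform: dst ~ [x, y, 1] @ A, with A of shape (3, 2).
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A

def ransac_affine(src, dst, n_iter=500, tol=1.0, seed=0):
    # Fit on minimal 3-point samples; keep the largest consistent (inlier) set.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # smallest possible set
        A = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(np.hstack([src, ones]) @ A - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best      # refit on the consensus set

# 40 control points under a known affine map; the first 10 are gross outliers.
rng = np.random.default_rng(9)
src = rng.uniform(0, 100, (40, 2))
A_true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
dst = np.hstack([src, np.ones((40, 1))]) @ A_true + rng.normal(0, 0.1, (40, 2))
dst[:10] += rng.uniform(20, 50, (10, 2))               # bad control points
A_est, inliers = ransac_affine(src, dst)
print(inliers.sum(), np.allclose(A_est, A_true, atol=0.5))  # ~30 inliers, True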
Johnson, G G; Geiduschek, E P
1977-04-05
The interaction of the phage SPO1 protein transcription factor 1 (TF1) with DNA has been analyzed by membrane filter binding and by sedimentation methods. Substantially specific binding of TF1 to helical SPO1 DNA can be demonstrated by nitrocellulose filter-binding assays at relatively low ionic strength (0.08). However, TF1-DNA complexes dissociate and reequilibrate relatively rapidly, and this makes filter-binding assays unsuitable for quantitative measurements of binding equilibria. Accordingly, the sedimentation properties of TF1-DNA complexes have been explored and a short-column centrifugation assay has been elaborated for quantitative measurements. Preferential binding of TF1 to the hydroxymethyluracil-containing SPO1 DNA has also been demonstrated by short-column centrifugation. TF1 binds relatively weakly and somewhat cooperatively to SPO1 DNA at many sites; TF1-DNA complexes dissociate and reequilibrate rapidly. At 20 degrees C in 0.01 M phosphate, pH 7.5, 0.15 M KCl, one molecule of TF1 can bind to approximately every 60 nucleotide pairs of SPO1 DNA.
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
Review of Evidence for Adult Diabetic Ketoacidosis Management Protocols.
Tran, Tara T T; Pease, Anthony; Wood, Anna J; Zajac, Jeffrey D; Mårtensson, Johan; Bellomo, Rinaldo; Ekinci, Elif I I
2017-01-01
Diabetic ketoacidosis (DKA) is an endocrine emergency with associated risk of morbidity and mortality. Despite this, DKA management lacks strong evidence due to the absence of large randomised controlled trials (RCTs). To review existing studies investigating inpatient DKA management in adults, focusing on intravenous (IV) fluids; insulin administration; potassium, bicarbonate, and phosphate replacement; and DKA management protocols and the impact of DKA resolution rates on outcomes. Ovid Medline searches were conducted with the limits "all adult" and published between "1973 to current" applied. National consensus statements were also reviewed. Eligibility was determined by two reviewers' assessment of title, abstract, and availability. A total of 85 eligible articles published between 1973 and 2016 were reviewed. The salient findings were: (i) Crystalloids are favoured over colloids, though evidence is lacking. The preferred crystalloid and hydration rates remain contentious. (ii) IV infusion of regular human insulin is preferred over the subcutaneous route or rapid-acting insulin analogues. Administering an initial IV insulin bolus before low-dose insulin infusions obviates the need for supplemental insulin. Consensus statements recommend fixed weight-based insulin infusions over "sliding scale" regimens, although evidence is weak. (iii) Potassium replacement is imperative, although no trials compare replacement rates. (iv) Bicarbonate replacement offers no benefit in DKA with pH > 6.9. In severe metabolic acidosis with pH < 6.9, there is a lack of both data and consensus regarding bicarbonate administration. (v) There is no evidence that phosphate replacement offers outcome benefits. Guidelines consider replacement appropriate in patients with cardiac dysfunction, anaemia, respiratory depression, or phosphate levels <0.32 mmol/L. (vi) Upon resolution of DKA, subcutaneous insulin is recommended, with IV insulin infusions ceased after an overlap of 1-2 h. (vii) DKA resolution rates are often used as end points in studies, despite a lack of evidence that rapid resolution improves outcome. (viii) Implementation of DKA protocols lacks strong evidence for adherence but may lead to improved clinical outcomes. There are major deficiencies in evidence for optimal management of DKA. Current practice is guided by weak evidence and consensus opinion. All aspects of DKA management require RCTs to affirm or redirect management and formulate consensus evidence-based practice to improve patient outcomes.
Spin selective filtering of polariton condensate flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, T.; Department of Materials Science and Technology, University of Crete, 71003 Heraklion, Crete; Antón, C.
2015-07-06
Spin-selective spatial filtering of propagating polariton condensates, using a controllable spin-dependent gating barrier, in a one-dimensional semiconductor microcavity ridge waveguide is reported. A nonresonant laser beam provides the source of propagating polaritons, while a second circularly polarized weak beam imprints a spin dependent potential barrier, which gates the polariton flow and generates polariton spin currents. A complete spin-based control over the blocked and transmitted polaritons is obtained by varying the gate polarization.
Quantitative consensus of supervised learners for diffuse lung parenchymal HRCT patterns
NASA Astrophysics Data System (ADS)
Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.
2013-03-01
Automated lung parenchymal classification usually relies on supervised learning of expert-chosen regions representative of the visually differentiable HRCT patterns specific to different pathologies (e.g., emphysema, ground glass, honeycombing, reticular and normal). Considering the elusiveness of a single most discriminating similarity measure, a plurality of weak learners can be combined to improve machine learnability. Though a number of quantitative combination strategies exist, their efficacy is data- and domain-dependent. In this paper, we investigate multiple (N = 12) quantitative consensus approaches to combine the clusters obtained with multiple (n = 33) probability-density-based similarity measures. Our study shows that hypergraph-based meta-clustering and probabilistic clustering provide optimal expert-metric agreement.
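One widely used quantitative consensus strategy (shown here generically, not as one of the twelve approaches studied above) is evidence accumulation: build a co-association matrix from the base clusterings and re-cluster it. A minimal sketch, with illustrative names:

```python
# Consensus clustering via a co-association matrix; thresholds and the
# "average" linkage choice are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_labels(base_clusterings, k):
    """base_clusterings: list of integer label arrays over the same samples."""
    n = len(base_clusterings[0])
    co = np.zeros((n, n))
    for labels in base_clusterings:
        labels = np.asarray(labels)
        co += labels[:, None] == labels[None, :]   # co-assignment counts
    co /= len(base_clusterings)                    # fraction of agreement
    dist = 1.0 - co                                # consensus distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), "average")
    return fcluster(Z, t=k, criterion="maxclust")
```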
[Types of ventilatory support and their indications in amyotrophic lateral sclerosis].
Perrin, C
2006-06-01
Respiratory muscle weakness represents the major cause of mortality in patients with amyotrophic lateral sclerosis (ALS). As a result, ventilatory assistance is an important part of disease management. Nowadays, noninvasive ventilation (NIV) has become the first-choice modality for most patients and represents an alternative to tracheostomy intermittent positive-pressure ventilation. Although some consensus guidelines have been proposed for initiating NIV in patients with restrictive chronic respiratory failure, these criteria are debated with regard to ALS. While the current consensus recommends that NIV may be used in symptomatic patients with hypercapnia or a forced vital capacity <50% of the predicted value, early use of NIV is proposed in the literature and reported in this paper.
Word Learning in 6-Month-Olds: Fast Encoding-Weak Retention
ERIC Educational Resources Information Center
Friedrich, Manuela; Friederici, Angela D.
2011-01-01
There has been general consensus that initial word learning during early infancy is a slow and time-consuming process that requires very frequent exposure, whereas later in development, infants are able to quickly learn a novel word for a novel meaning. From the perspective of memory maturation, this shift in behavioral development might represent…
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results show that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
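A hedged sketch of the estimation pipeline described above, using OpenCV. SURF lives in the opencv-contrib xfeatures2d module and is unavailable in some builds, so ORB stands in here; the Kalman state layout and noise levels are illustrative assumptions, not the paper's model.

```python
import cv2
import numpy as np

def interframe_homography(prev_gray, curr_gray):
    # Feature detection/matching; ORB replaces SURF for license-free builds
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards false matches before the global motion estimate
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Constant-velocity Kalman filter smoothing (dx, dy, scale) per frame; the
# measurement vector would come from decomposing H above (hypothetical layout).
kf = cv2.KalmanFilter(6, 3)
kf.transitionMatrix = np.eye(6, dtype=np.float32)
for i in range(3):
    kf.transitionMatrix[i, i + 3] = 1.0      # parameter += its velocity
kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
kf.processNoiseCov = 1e-4 * np.eye(6, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(3, dtype=np.float32)
```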
Technical and social evaluation of arsenic mitigation in rural Bangladesh.
Shafiquzzaman, Md; Azam, Md Shafiul; Mishima, Iori; Nakajima, Jun
2009-10-01
Technical and social performances of an arsenic-removal technology--the sono arsenic filter--in rural areas of Bangladesh were investigated. Results of the arsenic field test showed that filtered water met the Bangladesh standard (<50 microg/L) after two years of continuous use. A questionnaire was administered among 198 sono arsenic filter-user and 230 non-user families. Seventy-two percent of filters (n = 198) were working at the time of the survey; the other 28% had been abandoned due to breakage. This abandonment percentage (28%) was lower than for other mitigation options currently implemented in Bangladesh. Households were reluctant to repair the broken filters on their own. High cost, problems with filter maintenance, weak sludge-disposal guidance, and slow flow rate were the other demerits of the filter. These results indicate that the implementation approaches of the sono arsenic filter suffered from a lack of ownership and long-term sustainability. Continuous use of arsenic-contaminated tubewells by the non-user households demonstrated the lack of an alternative water supply in the survey area. Households' willingness to pay (about 30%) and preference for a household filter (50%) suggest the need to develop a low-cost household arsenic filter. Development of community-based organizations would also be necessary to implement a long-term, sustainable plan for household-based technology.
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...
2018-05-15
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated {\Lambda}CDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]
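For reference, the direct KS inversion the abstract describes can be written in a few lines on a flat-sky grid; this sketch ignores masks and noise (exactly the limitation noted above) and is not the DES pipeline code.

```python
# Minimal flat-sky Kaiser-Squires inversion: recover convergence kappa from
# shear (gamma1, gamma2) on a grid via FFT. Illustrative only.
import numpy as np

def kaiser_squires(gamma1, gamma2):
    ny, nx = gamma1.shape
    g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    k1 = np.fft.fftfreq(nx)[None, :]
    k2 = np.fft.fftfreq(ny)[:, None]
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0                      # avoid division by zero at k = 0
    # KS kernel: gamma_hat = D * kappa_hat, D = (k1^2 - k2^2 + 2i k1 k2)/k^2,
    # so kappa_hat = conj(D) * gamma_hat since |D| = 1 away from k = 0
    D = ((k1**2 - k2**2) + 2j * k1 * k2) / ksq
    kappa_hat = np.conj(D) * g_hat
    kappa_hat[0, 0] = 0.0                # mean convergence is unconstrained
    return np.fft.ifft2(kappa_hat).real  # .real keeps E-modes, drops B-modes
```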
Individual and Joint Expert Judgments as Reference Standards in Artifact Detection
Verduijn, Marion; Peek, Niels; de Keizer, Nicolette F.; van Lieshout, Erik-Jan; de Pont, Anne-Cornelie J.M.; Schultz, Marcus J.; de Jonge, Evert; de Mol, Bas A.J.M.
2008-01-01
Objective To investigate the agreement among clinical experts in their judgments of monitoring data with respect to artifacts, and to examine the effect of reference standards that consist of individual and joint expert judgments on the performance of artifact filters. Design Individual judgments of four physicians, a majority vote judgment, and a consensus judgment were obtained for 30 time series of three monitoring variables: mean arterial blood pressure (ABPm), central venous pressure (CVP), and heart rate (HR). The individual and joint judgments were used to tune three existing automated filtering methods and to evaluate the performance of the resulting filters. Measurements The interrater agreement was calculated in terms of positive specific agreement (PSA). The performance of the artifact filters was quantified in terms of sensitivity and positive predictive value (PPV). Results PSA values between 0.33 and 0.85 were observed among clinical experts in their selection of artifacts, with relatively high values for CVP data. Artifact filters developed using judgments of individual experts were found to moderately generalize to new time series and other experts; sensitivity values ranged from 0.40 to 0.60 for ABPm and HR filters (PPV: 0.57–0.84), and from 0.63 to 0.80 for CVP filters (PPV: 0.71–0.86). A higher performance value for the filters was found for the three variable types when joint judgments were used for tuning the filtering methods. Conclusion Given the disagreement among experts in their individual judgment of monitoring data with respect to artifacts, the use of joint reference standards obtained from multiple experts is recommended for development of automatic artifact filters. PMID:18096912
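For readers unfamiliar with the agreement metric, positive specific agreement for two binary raters is PSA = 2a/(2a + b + c), with a the jointly flagged points and b, c the discordant counts; a minimal sketch (the formula is standard, the function name is illustrative):

```python
import numpy as np

def positive_specific_agreement(r1, r2):
    """PSA = 2a / (2a + b + c) for two binary artifact judgments."""
    r1, r2 = np.asarray(r1, bool), np.asarray(r2, bool)
    a = np.sum(r1 & r2)      # both raters flag the sample as artifact
    b = np.sum(r1 & ~r2)     # rater 1 only
    c = np.sum(~r1 & r2)     # rater 2 only
    return 2 * a / (2 * a + b + c)
```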
2ND International Workshop on Adaptive Optics for Industry and Medicine.
2000-02-08
The spots are well-separated, and there are only very weak interference peaks between adjacent spots, so identification of the spots is easy and...for transmission through an interference filter, a polarizing filter, the SLM, and a 12 mm diameter aperture to mask the active area in the SLM. A... interfere greatly with the visibility of the primary image. However, as the SLM power increases so does the contrast of the secondary images and
A regularization of the Burgers equation using a filtered convective velocity
NASA Astrophysics Data System (ADS)
Norgard, Greg; Mohseni, Kamran
2008-08-01
This paper examines the properties of a regularization of the Burgers equation in one and multiple dimensions using a filtered convective velocity, which we have dubbed as the convectively filtered Burgers (CFB) equation. A physical motivation behind the filtering technique is presented. An existence and uniqueness theorem for multiple dimensions and a general class of filters is proven. Multiple invariants of motion are found for the CFB equation which are shown to be shared with the viscous and inviscid Burgers equations. Traveling wave solutions are found for a general class of filters and are shown to converge to weak solutions of the inviscid Burgers equation with the correct wave speed. Numerical simulations are conducted in 1D and 2D cases where the shock behavior, shock thickness and kinetic energy decay are examined. Energy spectra are also examined and are shown to be related to the smoothness of the solutions. This approach is presented with the hope of being extended to shock regularization of compressible Euler equations.
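A minimal 1D sketch of the CFB idea, assuming a Helmholtz filter (1 - a^2 d_xx) ubar = u applied spectrally and first-order upwind advection of u by the filtered velocity ubar; the filter choice and all numerical parameters are illustrative, not the paper's setup.

```python
# Convectively filtered Burgers: u_t + ubar u_x = 0 with ubar a filtered u.
import numpy as np

N, L, a, dt = 256, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
k = np.fft.fftfreq(N, d=dx) * 2 * np.pi
u = np.sin(x)                                  # initial condition

def helmholtz_filter(u):
    # Solve (1 - a^2 d_xx) ubar = u spectrally on the periodic domain
    return np.fft.ifft(np.fft.fft(u) / (1.0 + (a * k) ** 2)).real

for _ in range(2000):
    ub = helmholtz_filter(u)
    # first-order upwind derivative based on the sign of ubar
    dudx = np.where(ub > 0, (u - np.roll(u, 1)) / dx,
                            (np.roll(u, -1) - u) / dx)
    u = u - dt * ub * dudx
```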
A filtering method to generate high quality short reads using illumina paired-end technology.
Eren, A Murat; Vineis, Joseph H; Morrison, Hilary G; Sogin, Mitchell L
2013-01-01
Consensus between independent reads improves the accuracy of genome and transcriptome analyses; however, a lack of consensus between very similar sequences in metagenomic studies can, and often does, represent natural variation of biological significance. Machine-assigned quality scores on next-generation platforms do not necessarily correlate with accuracy. Here, we describe using the overlap of paired-end, short sequence reads to identify error-prone reads in marker gene analyses and their contribution to spurious OTUs following clustering analysis using QIIME. Our approach can also reduce error in shotgun sequencing data generated from libraries with small, tightly constrained insert sizes. The open-source implementation of this algorithm in the Python programming language, with user instructions, can be obtained from https://github.com/meren/illumina-utils.
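The core idea can be sketched in a few lines: when the two reads of a pair overlap, mismatches in the overlap flag error-prone pairs. This is a simplification for illustration, not the actual illumina-utils code, and it assumes the mate is already reverse-complemented and the overlap length is known.

```python
def overlap_mismatches(read1, read2_revcomp, overlap_len):
    """Compare the 3' end of read1 with the start of the reverse-complemented
    mate over a known overlap length; return the mismatch count."""
    tail = read1[-overlap_len:]
    head = read2_revcomp[:overlap_len]
    return sum(1 for a, b in zip(tail, head) if a != b)

def keep_pair(read1, read2_revcomp, overlap_len, max_mismatch=0):
    # Discard pairs whose overlap disagreement suggests sequencing error
    return overlap_mismatches(read1, read2_revcomp, overlap_len) <= max_mismatch
```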
Reconciled Rat and Human Metabolic Networks for Comparative Toxicogenomics and Biomarker Predictions
2017-02-08
compared with the original human GPR rules (Supplementary Fig. 3). The consensus-based approach for filtering orthology annotations was designed to... predictions in response to 76 drugs. We validate comparative predictions for xanthine derivatives with new experimental data and literature-based evidence
Moreno-Pino, Mario; De la Iglesia, Rodrigo; Valdivia, Nelson; Henríquez-Castilo, Carlos; Galán, Alexander; Díez, Beatriz; Trefault, Nicole
2016-07-01
Spatial environmental heterogeneity influences diversity of organisms at different scales. Environmental filtering suggests that local environmental conditions provide habitat-specific scenarios for niche requirements, ultimately determining the composition of local communities. In this work, we analyze the spatial variation of microbial communities across environmental gradients of sea surface temperature, salinity and photosynthetically active radiation and spatial distance in Fildes Bay, King George Island, Antarctica. We hypothesize that environmental filters are the main control of the spatial variation of these communities. Thus, strong relationships between community composition and environmental variation and weak relationships between community composition and spatial distance are expected. Combining physical characterization of the water column, cell counts by flow cytometry, small ribosomal subunit genes fingerprinting and next generation sequencing, we contrast the abundance and composition of photosynthetic eukaryotes and heterotrophic bacterial local communities at a submesoscale. Our results indicate that the strength of the environmental controls differed markedly between eukaryotes and bacterial communities. Whereas eukaryotic photosynthetic assemblages responded weakly to environmental variability, bacteria respond promptly to fine-scale environmental changes in this polar marine system. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Agar, S. M.; Kunreuther, H.
2005-12-01
Policy formulation for the mitigation and management of risks posed by natural hazards requires that governments confront difficult decisions for resource allocation and be able to justify their spending. Governments also need to recognize when spending offers little improvement and the circumstances in which relatively small amounts of spending can make substantial differences. Because natural hazards can have detrimental impacts on local and regional economies, patterns of economic development can also be affected by spending decisions for disaster mitigation. This paper argues that by mapping interdependencies among physical, social and economic factors, governments can improve resource allocation to mitigate the risks of natural hazards while improving economic development on local and regional scales. Case studies of natural hazards in Turkey have been used to explore specific "filters" that act to modify short- and long-term outcomes. Pre-event filters can prevent an event from becoming a natural disaster or change a routine event into a disaster. Post-event filters affect both short and long-term recovery and development. Some filters cannot be easily modified by spending (e.g., rural-urban migration) but others (e.g., land-use practices) provide realistic spending targets. Net social benefits derived from spending, however, will also depend on the ways by which filters are linked, or so-called "interdependencies". A single weak link in an interdependent system, such as a power grid, can trigger a cascade of failures. Similarly, weak links in social and commercial networks can send waves of disruption through communities. Conversely, by understanding the positive impacts of interdependencies, spending can be targeted to maximize net social benefits while mitigating risks and improving economic development. Detailed information on public spending was not available for this study but case studies illustrate how networks of interdependent filters can modify social benefits and costs. For example, spending after the 1992 Erzincan earthquake targeted local businesses but limited alternative employment, labor losses and diminished local markets all contributed to economic stagnation. Spending after the 1995 Dinar earthquake provided rent subsidies, supporting a major exodus from the town. Consequently many local people were excluded from reconstruction decisions and benefits offered by reconstruction funds. After the 1999 Marmara earthquakes, a 3-year economic decline in Yalova illustrates the vulnerability of local economic stability to weak regulation enforcement by a few agents. A resource allocation framework indicates that government-community relations, lack of economic diversification, beliefs, and compensation are weak links for effective spending. Stronger positive benefits could be achieved through spending to target land-use regulation enforcement, labor losses, time-critical needs of small businesses, and infrastructure. While the impacts of the Marmara earthquakes were devastating, strong commercial networks and international interests helped to re-establish the regional economy. Interdependencies may have helped to drive a recovery. Smaller events in eastern Turkey, however, can wipe out entire communities and can have long-lasting impacts on economic development. These differences may accelerate rural to urban migration and perpetuate regional economic divergence in the country. 1: Research performed in the Wharton MBA Program, Univ. of Pennsylvania.
Indications, complications and outcomes of inferior vena cava filters: A retrospective study.
Wassef, Andrew; Lim, Wendy; Wu, Cynthia
2017-05-01
Inferior vena cava filters are used to prevent embolization of a lower extremity deep vein thrombosis when the risk of pulmonary embolism is thought to be high. However, evidence is lacking for their benefit, and guidelines differ on the recommended indications for filter insertion. The study aim was to determine the reasons for inferior vena cava filter placement and the subsequent complication rate. A retrospective cohort of patients receiving inferior vena cava filters in Edmonton, Alberta, Canada, from 2007 to 2011 was studied. The main outcome was the indication for inferior vena cava filter insertion. Other measures included baseline demographics and medical history of patients, clinical outcomes and filter retrieval rates. 464 patients received inferior vena cava filters. An acute deep vein thrombosis with a contraindication to anticoagulation was the indication for 206 (44.4%) filter insertions. No contraindication to anticoagulation could be identified in 20.7% of filter placements. 30.6% were placed in those with active cancer, in whom mortality was significantly higher. Only 38.9% of retrievable filters were successfully retrieved. Inferior vena cava filters were placed frequently in patients with weak or no guideline-supported indications for filter placement and in up to 20% of patients with no contraindication to anticoagulation. The high rates of cancer and the high mortality rate of the cohort raise the possibility that some filters are placed inappropriately in end-of-life settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chiral filtration-induced spin/valley polarization in silicene line defects
NASA Astrophysics Data System (ADS)
Ren, Chongdan; Zhou, Benhu; Sun, Minglei; Wang, Sake; Li, Yunfang; Tian, Hongyu; Lu, Weitao
2018-06-01
The spin/valley polarization in silicene with extended line defects is investigated according to the chiral filtration mechanism. It is shown that the inner-built quantum Hall pseudo-edge states with identical chirality can serve as a chiral filter with a weak magnetic field and that the transmission process is restrained/strengthened for chiral states with reversed/identical chirality. With two parallel line defects, which act as natural chiral filtration, the filter effect is greatly enhanced, and 100% spin/valley polarization can be achieved.
ConsDock: A new program for the consensus analysis of protein-ligand interactions.
Paul, Nicodème; Rognan, Didier
2002-06-01
Protein-based virtual screening of chemical libraries is a powerful technique for identifying new molecules that may interact with a macromolecular target of interest. Because of docking and scoring limitations, it is more difficult to apply as a lead optimization method, which requires the docking/scoring tool to propose as few solutions as possible, all with very good accuracy for both the protein-bound orientation and the conformation of the ligand. In the present study, we present a consensus docking approach (ConsDock) that takes advantage of three widely used docking tools (Dock, FlexX, and Gold). The consensus analysis of all possible poses generated by several docking tools is performed sequentially in four steps: (i) hierarchical clustering of all poses generated by a docking tool into families represented by a leader; (ii) definition of all consensus pairs from leaders generated by different docking programs; (iii) clustering of consensus pairs into classes, represented by a mean structure; and (iv) ranking the different means starting from the most populated class of consensus pairs. When applied to a test set of 100 protein-ligand complexes from the Protein Data Bank, ConsDock significantly outperforms single docking with respect to the docking accuracy of the top-ranked pose. In 60% of the cases investigated here, ConsDock was able to rank as the top solution a pose within 2 Å RMSD of the X-ray structure. It can be applied as a postprocessing filter to either single- or multiple-docking programs to prioritize three-dimensional guided lead optimization from the most likely docking solution. Copyright 2002 Wiley-Liss, Inc.
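Steps (i)-(iii) can be sketched as RMSD-based clustering of each program's poses followed by pairing of leaders across programs; the 2 Å cutoff echoes the reported accuracy criterion, while the clustering method, leader choice and names are illustrative assumptions, not ConsDock's actual algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def pose_rmsd(p, q):
    return np.sqrt(np.mean(np.sum((p - q) ** 2, axis=1)))

def cluster_leaders(poses, cutoff=2.0):
    """poses: array (n_poses, n_atoms, 3); return one leader per RMSD cluster."""
    flat = poses.reshape(len(poses), -1)
    d = pdist(flat) / np.sqrt(poses.shape[1])   # Euclidean -> per-atom RMSD
    labels = fcluster(linkage(d, "average"), t=cutoff, criterion="distance")
    # first member stands in for the cluster leader in this simplified sketch
    return [poses[labels == c][0] for c in np.unique(labels)]

def consensus_pairs(leaders_a, leaders_b, cutoff=2.0):
    # pairs of leaders from two different programs that agree within cutoff
    return [(p, q) for p in leaders_a for q in leaders_b
            if pose_rmsd(p, q) < cutoff]
```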
Tejera, Eduardo; Cruz-Monteagudo, Maykel; Burgos, Germán; Sánchez, María-Eugenia; Sánchez-Rodríguez, Aminael; Pérez-Castillo, Yunierkis; Borges, Fernanda; Cordeiro, Maria Natália Dias Soeiro; Paz-Y-Miño, César; Rebelo, Irene
2017-08-08
Preeclampsia is a multifactorial disease with unknown pathogenesis. Although recent studies have explored this disease using several bioinformatics tools, their main objective was not directed at pathogenesis. Additionally, consensus prioritization has proved highly efficient in recognizing gene-disease associations. However, no information is available on the ability of consensus approaches to recognize, at an early stage, genes directly involved in pathogenesis. Our aim in this study is therefore to apply several theoretical approaches to explore preeclampsia, specifically the genes directly involved in its pathogenesis. We first evaluated the consensus among 12 prioritization strategies for the early recognition of pathogenic genes related to preeclampsia. A communality analysis of the previously selected genes in the protein-protein interaction network was then performed, followed by enrichment analysis covering metabolic pathways as well as gene ontology. Microarray data were also collected and used to confirm our results and to weight the previously enriched pathways. The consensus-prioritized gene list was rationally filtered to 476 genes using several criteria. The communality analysis showed an enrichment of communities connected with the VEGF-signaling pathway; this pathway is also enriched in the microarray data. Our results point to VEGF, FLT1 and KDR as relevant pathogenic genes, as well as those connected with NO metabolism. Our results reveal that the consensus strategy improves the detection and initial enrichment of pathogenic genes, at least in preeclampsia. Moreover, combining the top 1% of the prioritized genes with the protein-protein interaction network, followed by communality analysis, reduces the gene space. This approach identifies well-known genes related to pathogenesis; however, genes such as HSP90, PAK2 and CD247, also included in the top 1% of the prioritized list, need to be further explored in preeclampsia pathogenesis through experimental approaches.
Weak and Dynamic GNSS Signal Tracking Strategies for Flight Missions in the Space Service Volume
Jing, Shuai; Zhan, Xingqun; Liu, Baoyu; Chen, Maolin
2016-01-01
Weak signals and high dynamics are the two primary concerns of space navigation using GNSS (Global Navigation Satellite System) in the space service volume (SSV). The paper first defines a reference third-order phase-locked loop (PLL) as the baseline of an onboard GNSS receiver and demonstrates the inadequacy of this conventional architecture. An adaptive four-state Kalman filter (KF)-based algorithm is then introduced to optimize the loop noise bandwidth; it adaptively regulates its filter gain according to the received signal power and line-of-sight (LOS) dynamics. To overcome the problem of losing lock in weak-signal, high-dynamic environments, an open-loop tracking strategy aided by an inertial navigation system (INS) is recommended, and the traditional maximum likelihood estimation (MLE) method is modified in a non-coherent way by reconstructing the likelihood cost function. Furthermore, a typical mission with combined orbital maneuvering and non-maneuvering arcs is taken as a test object for the two proposed strategies. Finally, computer simulation experiments confirm the effectiveness of the adaptive four-state KF-based strategy under non-maneuvering conditions and the virtue of INS-assisted methods under maneuvering conditions. PMID:27598164
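A schematic of the adaptive KF-based tracking idea (state: phase, Doppler, Doppler rate, jerk, in radian units), where the measurement noise grows as the received C/N0 drops so the effective loop bandwidth adapts; this is a hedged sketch under stated assumptions, not the paper's exact algorithm, and the R model is a common PLL thermal-noise approximation.

```python
import numpy as np

T = 1e-3                                     # integration interval (s)
F = np.array([[1, T, T**2 / 2, T**3 / 6],    # constant-jerk phase dynamics
              [0, 1, T,        T**2 / 2],
              [0, 0, 1,        T       ],
              [0, 0, 0,        1       ]])
H = np.array([[1.0, 0.0, 0.0, 0.0]])         # discriminator observes phase

def kf_update(x, P, z, cn0_hz, Q):
    R = np.array([[1.0 / (2 * cn0_hz * T)]]) # weaker signal -> larger R
    x, P = F @ x, F @ P @ F.T + Q            # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # gain adapts to signal power
    x = x + (K @ (z - H @ x)).ravel()        # correct with discriminator output
    P = (np.eye(4) - K @ H) @ P
    return x, P
```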
NASA Astrophysics Data System (ADS)
Zwanenburg, Philip; Nadarajah, Siva
2016-02-01
The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms, highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher-dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler test case with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes leads to the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.
Marc J. Stern; S. Andrew Predmore; Michael J. Mortimer; David N. Seesholtz
2010-01-01
We conducted an online survey (n = 3321) followed by five focus groups with Forest Service employees involved in compliance with the National Environmental Policy Act (NEPA) to explore agency views of how NEPA should be implemented within the agency. We filter these perceptions through the lenses of different functional groups within the agency, each with its own role...
Tailoring noise frequency spectrum to improve NIR determinations.
Xie, Shaofei; Xiang, Bingren; Yu, Liyan; Deng, Haishan
2009-12-15
Near infrared spectroscopy (NIR) contains excessive background noise and weak analytical signals caused by near infrared overtones and combinations, which makes it difficult to achieve quantitative determinations of low-concentration samples by NIR. A simple chemometric approach has been established to modify the noise frequency spectrum and thereby improve NIR determinations. The proposed method multiplies a Savitzky-Golay-filtered NIR spectrum by a reference spectrum with added thermal noise before applying a second Savitzky-Golay filter. Since the Savitzky-Golay filter is a low-pass filter and cannot eliminate low-frequency components of the NIR spectrum, one or two consecutive Savitzky-Golay filtering steps alone do little to improve NIR determinations. In contrast, a significant improvement is achieved when the multiplication step is inserted between the two filters: the frequency range of the modified noise spectrum shifts toward the higher-frequency regime, so the second Savitzky-Golay filter provides better filtering efficiency and satisfactory results. The improvement of NIR determination with the noise-frequency-spectrum tailoring technique was demonstrated on both a simulated dataset and two measured NIR spectral datasets. It is expected that this technique will be adopted mostly in applications where quantitative determination of low-concentration samples is crucial.
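A sketch of the described chain with SciPy's Savitzky-Golay filter: filter the NIR spectrum, multiply by a reference spectrum carrying added thermal (white) noise to push the residual noise toward higher frequencies, then filter again. Window length, polynomial order and noise level are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import savgol_filter

def tailored_sg(spectrum, reference, noise_sd=1e-3, window=11, order=3, seed=0):
    rng = np.random.default_rng(seed)
    first = savgol_filter(spectrum, window, order)        # first SG pass
    noisy_ref = reference + rng.normal(0.0, noise_sd, size=reference.shape)
    shifted = first * noisy_ref          # moves residual noise up in frequency
    return savgol_filter(shifted, window, order)          # second SG pass
```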
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
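The mechanism is easy to reproduce numerically: the sample variance of n Gaussian modes (a noisy two-point estimate) follows a skewed Gamma-type distribution, so most realizations scatter below the truth. A small demonstration with arbitrary numbers, not the paper's calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
truth, n_modes, n_real = 1.0, 20, 100_000
draws = rng.normal(0.0, np.sqrt(truth), size=(n_real, n_modes))
estimates = draws.var(axis=1)        # one noisy two-point estimate per realization
print(np.mean(estimates < truth))    # > 0.5: typical realizations scatter low
```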
Iranian-Venezuelan Relations and Impacts on the United States
2012-12-01
leader, (5) charismatic leadership, (6) direct linkage between the leader and the masses, (7) the use of anti-intellectual rhetoric, (8) an... between lower and middle classes, who form the masses. Finally, there is always a feeling against representative democracy... proving that reformist ideas were largely supported by the Iranian public. Feeling this popular support and the temporary weakness of conservatives
The use of Delphi and Nominal Group Technique in nursing education: A review.
Foth, Thomas; Efstathiou, Nikolaos; Vanderspank-Wright, Brandi; Ufholz, Lee-Anne; Dütthorn, Nadin; Zimansky, Manuel; Humphrey-Murto, Susan
2016-08-01
Consensus methods are used by healthcare professionals and educators within nursing education because of their presumed capacity to extract the profession's "collective knowledge", which is often considered tacit knowledge that is difficult to verbalize and to formalize. Since their emergence, consensus methods have been criticized and their rigour has been questioned. Our study focuses on the use of consensus methods in nursing education and seeks to explore how extensively consensus methods are used, the types of consensus methods employed, the purpose of the research and how standardized the application of the methods is. A systematic approach was employed to identify articles reporting the use of consensus methods in nursing education. The search strategy included keyword searches in five electronic databases [Medline (Ovid), Embase (Ovid), AMED (Ovid), ERIC (Ovid) and CINAHL (EBSCO)] for the period 2004-2014. We included articles published in English, French, German and Greek discussing the use of consensus methods in nursing education or in the context of identifying competencies. A standardized extraction form was developed using an iterative process with results from the search. General descriptors such as type of journal, nursing speciality, type of educational issue addressed, method used, and geographic scope were recorded. Features reflecting methodology, such as the number, selection and composition of panel participants, number of rounds, response rates, definition of consensus, and feedback, were recorded. 1230 articles were screened, resulting in 101 included studies. The Delphi was used in 88.2% of studies. Most were reported in nursing journals (63.4%). The most common purposes for using these methods were defining competencies, curriculum development and renewal, and assessment. Remarkably, both standardization and reporting of consensus methods were noted to be generally poor. Areas where the methodology appeared weak included: preparation of the initial questionnaire; the selection and description of participants; number of rounds and number of participants remaining after each round; formal feedback of group ratings; definitions of consensus and a priori definition of the number of rounds; and modifications to the methodology. The findings of this study are concerning if interpreted within the context of the structural critiques, because our findings lend support to these critiques. If consensus methods are to continue being used to inform best practices in nursing education, they must be rigorous in design. Copyright © 2016 Elsevier Ltd. All rights reserved.
A moving hum filter to suppress rotor noise in high-resolution airborne magnetic data
Xia, J.; Doll, W.E.; Miller, R.D.; Gamey, T.J.; Emond, A.M.
2005-01-01
A unique filtering approach is developed to eliminate helicopter rotor noise. It is designed to suppress harmonic noise from a rotor that varies slightly in amplitude, phase, and frequency and that contaminates aeromagnetic data. The filter provides a powerful harmonic noise-suppression tool for data acquired with modern large-dynamic-range recording systems. This three-step approach - polynomial fitting, bandpass filtering, and rotor-noise synthesis - significantly reduces rotor noise without altering the spectra of signals of interest. The two steps before hum filtering - polynomial fitting and bandpass filtering - are critical to accurately model the weak rotor noise. During rotor-noise synthesis, amplitude, phase, and frequency are determined. Data are processed segment by segment so that there is no limit on the length of data. The segment length changes dynamically along a line based on modeling results. Modeling the rotor noise is stable and efficient. Real-world data examples demonstrate that this method can suppress rotor noise by more than 95% when implemented in an aeromagnetic data-processing flow. © 2005 Society of Exploration Geophysicists. All rights reserved.
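A hedged single-segment sketch of the three-step idea: polynomial fitting removes the slowly varying geologic trend, and the rotor line at frequency f0 is synthesized by least squares and subtracted (the intermediate bandpass step is omitted for brevity). Parameter values and names are illustrative, not the paper's.

```python
import numpy as np

def remove_rotor_hum(signal, fs, f0, poly_deg=3):
    t = np.arange(len(signal)) / fs
    trend = np.polyval(np.polyfit(t, signal, poly_deg), t)
    resid = signal - trend                       # isolates the weak hum
    # least-squares amplitude/phase of the hum: resid ~ a*cos + b*sin at f0
    X = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
    return signal - X @ coef                     # synthesize hum and subtract
```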
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Serzhantov, A. M.; Bal'va, Ya. F.; Leksikov, An. A.; Galeev, R. G.
2015-05-01
A microstrip bandpass filter of new design, based on original resonators with an interdigital structure of conductors, has been studied. The proposed filters of third to sixth order are distinguished by high frequency selectivity and a much smaller size than analogous designs. It is established that a broad stop band, extending up to six times the central passband frequency, is determined by the low unloaded Q of the higher resonance mode and the weak coupling of resonators in the pass band. It is shown for the first time that, as the spacing of the interdigital stripe conductors decreases, the Q of the higher resonance mode monotonically drops, while the Q value for the first operating mode remains high. A prototype fourth-order filter with a central frequency of 0.9 GHz, manufactured on a ceramic substrate with dielectric permittivity ɛ = 80, has microstrip topology dimensions of 9.5 × 4.6 × 1 mm³. Electrodynamic 3D-model simulations of the filter characteristics agree well with the results of measurements.
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
Kos, Sebastian; Huegli, Rolf; Hofmann, Eugen; Quick, Harald H; Kuehl, Hilmar; Aker, Stephanie; Kaiser, Gernot M; Borm, Paul J A; Jacob, Augustinus L; Bilecen, Deniz
2009-05-01
The purpose of this study was to demonstrate feasibility of percutaneous transluminal aortic stenting and cava filter placement under magnetic resonance imaging (MRI) guidance exclusively using a polyetheretherketone (PEEK)-based MRI-compatible guidewire. Percutaneous transluminal aortic stenting and cava filter placement were performed in 3 domestic swine. Procedures were performed under MRI-guidance in an open-bore 1.5-T scanner. The applied 0.035-inch guidewire has a PEEK core reinforced by fibres, floppy tip, hydrophilic coating, and paramagnetic markings for passive visualization. Through an 11F sheath, the guidewire was advanced into the abdominal (swine 1) or thoracic aorta (swine 2), and the stents were deployed. The guidewire was advanced into the inferior vena cava (swine 3), and the cava filter was deployed. Postmortem autopsy was performed. Procedural success, guidewire visibility, pushability, and stent support were qualitatively assessed by consensus. Procedure times were documented. Guidewire guidance into the abdominal and thoracic aortas and the inferior vena cava was successful. Stent deployments were successful in the abdominal (swine 1) and thoracic (swine 2) segments of the descending aorta. Cava filter positioning and deployment was successful. Autopsy documented good stent and filter positioning. Guidewire visibility through applied markers was rated acceptable for aortic stenting and good for venous filter placement. Steerability, pushability, and device support were good. The PEEK-based guidewire allows either percutaneous MRI-guided aortic stenting in the thoracic and abdominal segments of the descending aorta and filter placement in the inferior vena cava with acceptable to good device visibility and offers good steerability, pushability, and device support.
Tunneling Time and Weak Measurement in Strong Field Ionization.
Zimmermann, Tomáš; Mishra, Siddhartha; Doran, Brent R; Gordon, Daniel F; Landsman, Alexandra S
2016-06-10
Tunneling delays represent a hotly debated topic, with many conflicting definitions and little consensus on when and if such definitions accurately describe the physical observables. Here, we relate these different definitions to distinct experimental observables in strong field ionization, finding that two definitions, Larmor time and Bohmian time, are compatible with the attoclock observable and the resonance lifetime of a bound state, respectively. Both of these definitions are closely connected to the theory of weak measurement, with Larmor time being the weak measurement value of tunneling time and the Bohmian trajectory corresponding to the average particle trajectory, which has recently been reconstructed using weak measurement in a two-slit experiment [S. Kocsis, B. Braverman, S. Ravets, M. J. Stevens, R. P. Mirin, L. K. Shalm, and A. M. Steinberg, Science 332, 1170 (2011)]. We demonstrate a large discrepancy in strong field ionization between the Bohmian and weak measurement values of tunneling time, and we suggest this arises because the tunneling time is calculated for a small-probability postselected ensemble of electrons. Our results have important implications for the interpretation of experiments in attosecond science, suggesting that tunneling is unlikely to be an instantaneous process.
Review of the role of probiotics in gastrointestinal diseases in adults.
Sebastián Domingo, Juan José
Probiotics may act as biological agents that modify the intestinal microbiota and certain cytokine profiles, which can lead to an improvement in certain gastrointestinal diseases. To conduct a review of the evidence of the role of probiotics in certain gastrointestinal diseases in adults. Review conducted using appropriate descriptors, filters and limits in the PubMed database (MEDLINE). The MeSH terms used were Probiotics [in the title] AND Gastrointestinal Diseases, with the following limits or filters: Types of study: Systematic Reviews, Meta-Analysis, Guideline, Practice Guideline, Consensus Development Conference (and Consensus Development Conference NIH), Randomized Controlled Trial, Controlled Clinical Trial and Clinical Trial; age: adults (19 or older); language: English and Spanish; in humans, and with at least one abstract. Full texts of all the Systematic Reviews and meta-analyses directly related to the review's objective were obtained, as well as the Randomised Controlled Trials of the studies that were considered relevant and of sufficient quality for this review. Certain probiotics, different for each process, have proven to be effective and beneficial in cases of acute infectious diarrhoea, antibiotic-associated diarrhoea, Clostridium difficile-associated diarrhoea, pouchitis and Helicobacter pylori infection eradication. Although some probiotics have not demonstrated any benefit, there are certain gastrointestinal diseases in which the use of probiotics, true biological agents, can be recommended. Copyright © 2017 Elsevier España, S.L.U., AEEH y AEG. All rights reserved.
The Development of a Beta-Gamma Personnel Dosimeter
NASA Astrophysics Data System (ADS)
Tsakeres, Frank Steven
The assessment of absorbed dose in mixed beta and gamma radiation fields is an extremely complex task. For many years, the assessment of the absorbed dose to tissue from the weakly penetrating components of a radiation field (i.e., beta particles, electrons) has been largely ignored. Beta radiation fields are encountered routinely in a nuclear facility and may represent the major radiation component under certain accident or emergency conditions. Many attempts have been made to develop an accurate mixed field personnel dosimeter. However, all of these dosimeters have exhibited numerous response problems which have limited their usefulness for personnel dose assessment. Consequently, the absorbed dose at the epidermal depth (i.e., 7 mg/cm²) has been difficult to measure accurately. The objective of this research project was to design, build, and test a sensitive and accurate personnel dosimeter for mixed field applications. The selection of the various dosimeter elements was determined by evaluating several types of phosphors, filters, and backscatter materials. After evaluating the various response characteristics of the badge components, a prototype dosimeter, the CHEMM (CaF₂:Dy Highly Efficient Multiple Element Multiple Filter) personnel dosimeter, was developed and tested at Georgia Tech, Emory University and the National Bureau of Standards. This dosimeter comprised four large CaF₂:Dy (TLD-200) TLDs and a standard LiF (TLD-100) chip. The weakly penetrating and penetrating components of a radiation field were separated using a series of TLD/filter combinations and a new dose assessment algorithm. The large TLD-200 chips, along with a series of tissue-equivalent filters, were used to determine the absorbed dose due to the weakly penetrating radiation, while a LiF/filter combination was used to measure the penetrating component. In addition, a new backscatter material was included in the badge design to better simulate a tissue-equivalent response. The CHEMM personnel dosimeter performance tests were conducted to simulate actual mixed radiation field environments. This dosimeter provided a high degree of sensitivity with accuracies well within the ANSI recommended performance standards for personnel dosimeters. In addition, it was concluded that the CHEMM dosimetry system provided a practical dosimeter alternative with a higher dose assessment accuracy and measurement sensitivity than the personnel dosimetry systems presently used in the nuclear power industry.
ERIC Educational Resources Information Center
Truett, Carol; And Others
1997-01-01
Provides advice for making school Internet-use guidelines. Outlines responsible proactive use of the Internet for educators and librarians, discusses strengths and weaknesses of Internet blocking software and rating systems, and describes acceptable-use policies (AUP). Lists resources for creating your own AUP, Internet filtering software, and…
Design of HTS filter for GSM-R communication system
NASA Astrophysics Data System (ADS)
Cui, Hongyu; Ji, Laiyun
2018-04-01
High-temperature superconducting materials, with their excellent performance, have become increasingly valued by industry, especially in the field of electronic information. A superconducting material has almost zero surface resistance, and a filter made of it has the characteristics of low insertion loss, high edge steepness and good out-of-band rejection. It offers higher selectivity for the desired signal, and thus less interference from adjacent-channel signals, while its low noise coefficient improves the ability to detect weak signals. This design presents a high-temperature superconducting filter for the GSM-R communication system, which can overcome many shortcomings of traditional GSM-R filters. The filter is made of DyBCO, a high-temperature superconducting thin-film material on a magnesium oxide (MgO) substrate with a dielectric constant of 9.7; the center frequency is 887.5 MHz and the bandwidth 5 MHz.
Raphael, K G; Santiago, V; Lobbezoo, F
2016-10-01
Inspired by the international consensus on defining and grading of bruxism (Lobbezoo F, Ahlberg J, Glaros AG, Kato T, Koyano K, Lavigne GJ et al. J Oral Rehabil. 2013;40:2), this commentary examines its contribution and underlying assumptions for defining sleep bruxism (SB). The consensus' parsimonious redefinition of bruxism as a behaviour is an advance, but we explore an implied question: might SB be more than behaviour? Behaviours do not inherently require clinical treatment, making the consensus-proposed 'diagnostic grading system' inappropriate. However, diagnostic grading might be useful, if SB were considered a disorder. Therefore, to fully appreciate the contribution of the consensus statement, we first consider standards and evidence for determining whether SB is a disorder characterised by harmful dysfunction or a risk factor increasing probability of a disorder. Second, the strengths and weaknesses of the consensus statement's proposed 'diagnostic grading system' are examined. The strongest evidence-to-date does not support SB as disorder as implied by 'diagnosis'. Behaviour alone is not diagnosed; disorders are. Considered even as a grading system of behaviour, the proposed system is weakened by poor sensitivity of self-report for direct polysomnographic (PSG)-classified SB and poor associations between clinical judgments of SB and portable PSG; reliance on dichotomised reports; and failure to consider SB behaviour on a continuum, measurable and definable through valid behavioural observation. To date, evidence for validity of self-report or clinician report in placing SB behaviour on a continuum is lacking, raising concerns about their potential utility in any bruxism behavioural grading system, and handicapping future study of whether SB may be a useful risk factor for, or itself a disorder requiring treatment. © 2016 John Wiley & Sons Ltd.
Effects of training set selection on pain recognition via facial expressions
NASA Astrophysics Data System (ADS)
Shier, Warren A.; Yanushkevich, Svetlana N.
2016-07-01
This paper presents an approach to pain expression classification based on Gabor energy filters with support vector machines (SVMs), followed by an analysis of the effects of training set variations on the system's classification rate. The approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images, hand labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the subjects' pain intensity level has been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters with SVMs provide results comparable or superior to previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, i.e. changing the number of images per subject, shows that both completely removing and over-representing poor-quality subjects in the training set has little effect on the overall accuracy of the system. This result suggests that poor-quality subjects could be removed from the training set to save offline training time, and that the SVM is robust not only to outliers in the training data but also to significant amounts of poor-quality data mixed into the training sets.
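The classification stage lends itself to a compact illustration. Below is a minimal sketch, assuming scikit-image's gabor filter and scikit-learn's SVC on synthetic stand-in images rather than the UNBC-McMaster data; the frequencies, orientations and energy pooling are illustrative choices, not the authors' configuration.

```python
# Gabor-energy features + SVM, as a toy stand-in for the pipeline above.
import numpy as np
from skimage.filters import gabor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def gabor_energy_features(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Pool mean Gabor energy over a small bank of frequencies/orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.sqrt(real**2 + imag**2).mean())   # Gabor energy
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.random((60, 32, 32))            # stand-ins for face images
labels = rng.integers(0, 3, size=60)         # 0 = no pain, 1 = weak, 2 = strong

X = np.stack([gabor_energy_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```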
Lin, Gong-Ru; Cheng, Tzu-Kang; Chi, Yu-Chieh; Lin, Gong-Cheng; Wang, Hai-Lin; Lin, Yi-Hong
2009-09-28
In a weak-resonant-cavity Fabry-Perot laser diode (WRC-FPLD) based DWDM-PON system with an arrayed-waveguide-grating (AWG) channelized amplified spontaneous emission (ASE) source located at the remote node, we study the effect of AWG filter bandwidth on the transmission performance of the 1.25-Gbit/s directly modulated WRC-FPLD transmitter under AWG-channelized ASE injection-locking. With AWG filters of two different channel spacings, 50 and 200 GHz, several characteristic parameters of the WRC-FPLD transmitted data, such as interfered reflection, relative intensity noise, crosstalk reduction, side-mode suppression ratio and BER power penalty, are compared. The 200-GHz AWG-filtered ASE injection minimizes the noise of the WRC-FPLD based ONU transmitter, improving the power penalty of upstream data by -1.6 dB at a BER of 10(-12). In contrast, the 50-GHz AWG-channelized ASE injection fails to improve the BER and instead increases the power penalty by +1.5 dB under back-to-back transmission. A theoretical model elucidates that the BER degradation of up to 4 orders of magnitude between the two injection cases is mainly attributed to the reduction of the ASE injection linewidth, which concurrently degrades the signal-to-noise and extinction ratios of the transmitted data stream.
NASA Astrophysics Data System (ADS)
Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.
2011-08-01
Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications to the laws of gravitation, by accurately measuring the expansion of the Universe and the growth of structures. The return from future dark energy surveys must be optimized to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy therefore has to be defined with both the photometric redshift and shape measurement accuracy in mind. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area and accounting in particular for both Galactic absorption and Zodiacal light variation across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of u-band photometry quality in optimizing the photometric redshift and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what an optimal weak-lensing dark energy mission could be, based on FoM calculation. We find that we can derive the most accurate photometric redshifts for the bulk of the faint galaxy population when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near-infrared). Assuming a five-year mission duration, a mirror size of 1.5 m and a 0.5 deg2 FOV with a visible pixel scale of 0.15'', we find that a homogeneous survey reaching a depth of IAB = 25.6 (10σ) with a sky coverage of ~11 000 deg2 maximizes the weak-lensing FoM. The effective number density of galaxies usable for weak lensing is then ~45 gal/arcmin2, at least a factor of two higher than in ground-based surveys. Conclusions: This study demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future weak-lensing space dark energy mission.
Coordination of networked systems on digraphs with multiple leaders via pinning control
NASA Astrophysics Data System (ADS)
Chen, Gang; Lewis, Frank L.
2012-02-01
It is well known that achieving consensus among a group of multi-vehicle systems by local distributed control is feasible if and only if all nodes in the communication digraph are reachable from a single (root) node. In this article, we consider the more general case in which the communication digraph of the networked multi-vehicle systems is weakly connected and has two or more zero-in-degree, strongly connected subgraphs, i.e. there are two or more leader groups. Based on the pinning control strategy, the feasibility problem of achieving second-order controlled consensus is studied. First, a necessary and sufficient condition is given for the case of fixed topology. Then the method for designing the controller and the rule for choosing the pinned vehicles are discussed. The proposed approach allows us to extend several existing results for undirected graphs to directed balanced graphs. A sufficient condition is proposed for the case where the coupling topology is time-varying. As an illustrative example, a second-order controlled consensus scheme is applied to coordinate the movement of networked multiple mobile robots.
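The pinning mechanism can be made concrete with a toy simulation. The sketch below runs double-integrator agents on a small digraph with a single pinned node tracking a static leader; the topology, pinning gain and step size are illustrative assumptions, not the paper's conditions.

```python
# Second-order consensus with pinning control on a toy digraph.
import numpy as np

A = np.array([[0, 1, 0, 0],        # a_ij = 1 if vehicle i receives info from j
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
pinned = np.array([1.0, 0.0, 0.0, 0.0])   # only vehicle 0 is pinned
x_lead, v_lead = 5.0, 0.0                 # static leader position and velocity

n, dt, k_p = A.shape[0], 0.01, 2.0
x = np.random.default_rng(1).normal(size=n)   # initial positions
v = np.zeros(n)                               # initial velocities
for _ in range(5000):
    u = np.zeros(n)
    for i in range(n):
        u[i] = sum(A[i, j] * ((x[j] - x[i]) + (v[j] - v[i])) for j in range(n))
        u[i] += k_p * pinned[i] * ((x_lead - x[i]) + (v_lead - v[i]))
    v += dt * u          # double-integrator dynamics: x'' = u
    x += dt * v
print("final positions:", np.round(x, 3))   # should settle near the leader at 5.0
```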
Christina Lyons-Tinsley; David L. Peterson
2012-01-01
Previous studies have debated the flammability of young regenerating stands, especially those in a matrix of mature forest, and no consensus has emerged as to whether young stands are inherently prone to high-severity wildfire. This topic has recently been addressed using spatial imagery, and weak inferences were made given the scale mismatch between the coarse...
[Factors affecting biological removal of iron and manganese in groundwater].
Xue, Gang; He, Sheng-Bing; Wang, Xin-Ze
2006-01-01
Factors affecting the biological process for removing iron and manganese from groundwater were analyzed. When DO and pH in the groundwater after aeration were 7.0-7.5 mg/L and 6.8-7.0 respectively, not only can the activity of Mn2+ oxidizing bacteria be maintained, but the demands of iron and manganese removal can also be satisfied. A novel inoculation approach of grafting mature filter material into the filter bed, which is easier to handle than selective culture media, was employed in this research. However, this approach was only suitable for a filter material of high-quality manganese sand with strong Mn2+ adsorption capacity. For a filter material of quartz sand with weak adsorption capacity, only culturing and domesticating Mn2+ oxidizing bacteria with selective culture media can be adopted for inoculating the filter bed. The optimal backwashing rate of a biological filter bed filled with manganese sand or quartz sand should be kept at a relatively low level of 6-9 L/(m2·s) and 7-11 L/(m2·s), respectively. Under these conditions the stability of the microbial phase in the filter bed was not disturbed, and iron and manganese removal efficiency recovered in less than 5 h. Moreover, by using filter material with a uniform particle size of 1.0-1.2 mm in the filter bed, the filtration cycle reached as long as 35-38 h.
Consensus Prediction of Charged Single Alpha-Helices with CSAHserver.
Dudola, Dániel; Tóth, Gábor; Nyitray, László; Gáspári, Zoltán
2017-01-01
Charged single alpha-helices (CSAHs) constitute a rare structural motif. CSAH is characterized by a high density of regularly alternating residues with positively and negatively charged side chains. Such segments exhibit unique structural properties; however, there are only a handful of proteins in which their existence has been experimentally verified. Therefore, establishing a pipeline that is capable of predicting the presence of CSAH segments with a low false positive rate is of considerable importance. Here we describe a consensus-based approach that relies on two conceptually different CSAH detection methods and a final filter based on the estimated helix-forming capabilities of the segments. This pipeline was shown to be capable of identifying previously uncharacterized CSAH segments that could be verified experimentally. The method is available as a web server at http://csahserver.itk.ppke.hu and also as a downloadable standalone program suitable for scanning larger sequence collections.
Efficient Organometallic Spin Filter between Single-Wall Carbon Nanotube or Graphene Electrodes
NASA Astrophysics Data System (ADS)
Koleini, Mohammad; Paulsson, Magnus; Brandbyge, Mads
2007-05-01
We present a theoretical study of spin transport in a class of molecular systems consisting of an organometallic benzene-vanadium cluster placed in between graphene or single-wall carbon-nanotube-model contacts. Ab initio modeling is performed by combining spin density functional theory and nonequilibrium Green’s function techniques. We consider weak and strong cluster-contact bonds. Depending on the bonding we find from 73% (strong bonds) up to 99% (weak bonds) spin polarization of the electron transmission, and enhanced polarization with increased cluster length.
Design considerations for near-infrared filter photometry: effects of noise sources and selectivity.
Tarumi, Toshiyasu; Amerov, Airat K; Arnold, Mark A; Small, Gary W
2009-06-01
Optimal filter design of two-channel near-infrared filter photometers is investigated for simulated two-component systems consisting of an analyte and a spectrally overlapping interferent. The degree of overlap between the analyte and interferent bands is varied over three levels. The optimal design is obtained for three cases: a source or background flicker noise limited case, a shot noise limited case, and a detector noise limited case. Conventional photometers consist of narrow-band optical filters with their bands located at discrete wavelengths. However, the use of broadband optical filters with overlapping responses has been proposed to obtain as much signal as possible from a weak and broad analyte band typical of near-infrared absorptions. One question regarding the use of broadband optical filters with overlapping responses is the selectivity achieved by such filters. The selectivity of two-channel photometers is evaluated on the basis of the angle between the analyte and interferent vectors in the space spanned by the relative change recorded for each of the two detector channels. This study shows that for the shot noise limited or detector noise limited cases, the slight decrease in selectivity with the use of broadband optical filters can be compensated by the higher signal-to-noise ratio afforded by the use of such filters. For the source noise limited case, the best quantitative results are obtained with the use of narrow-band non-overlapping optical filters.
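The selectivity metric described above reduces to a one-line computation. A minimal sketch, with illustrative channel-response vectors rather than the paper's simulated values:

```python
# Angle between analyte and interferent vectors in two-channel response space.
import numpy as np

analyte = np.array([0.8, 0.3])       # relative response in (channel 1, channel 2)
interferent = np.array([0.6, 0.5])

cos_t = analyte @ interferent / (np.linalg.norm(analyte) * np.linalg.norm(interferent))
angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
print(f"selectivity angle: {angle:.1f} deg")   # larger angle = more selective
```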
The Impact of Soviet Ethnicity and Demographic Changes on Soviet Foreign Policy.
1984-03-01
ethnicity, here in particular economic ones ... viewed first in the European areas and then in the non-European areas of the Soviet Union. ... Since the Soviet Union is essentially a collective leadership, with fluid coalitions or blocs, creating consensus for policy formation is the key to power ... essentially the history of Russia through official Communist filters. Lessons from the past are applied to the present, whether or not appropriate in context
Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun
2016-04-01
Traditional speech detection methods regard noise as a jamming signal to be filtered out, but under a strong noise background these methods lose part of the original speech signal while eliminating the noise. Stochastic resonance can use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method for extracting weak speech signals using adaptive stochastic resonance is proposed. This method, combined with twice sampling, realizes the detection of weak speech signals in strong noise. The parameters a, b of the system are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, and the weak speech signal is then optimally detected. Experimental simulation analysis showed that under a strong noise background, the output signal-to-noise ratio increased from the initial value of -7 dB to about 0.86 dB, a signal-to-noise ratio gain of 7.86 dB. This method markedly raises the signal-to-noise ratio of the output speech signals, offering a new approach to detecting weak speech signals in strong noise environments.
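The bistable dynamics underlying the method can be sketched compactly. The toy example below assumes the standard double-well system dx/dt = ax - bx^3 driven by a weak tone (standing in for speech) plus noise, with illustrative parameters a, b and noise intensity D; it shows the kind of output signal-to-noise measurement the adaptive parameter search would optimize.

```python
# Bistable stochastic resonance driven by a weak tone plus noise (toy example).
import numpy as np

fs, dur, f0 = 2000, 4.0, 5.0
dt = 1 / fs
t = np.arange(0, dur, dt)
s = 0.1 * np.sin(2 * np.pi * f0 * t)   # weak sub-threshold drive
rng = np.random.default_rng(0)

a, b, D = 1.0, 1.0, 0.5                # wells at +/-1, barrier height a^2/(4b)
x = np.zeros_like(t)
for k in range(1, t.size):             # Euler-Maruyama: dx = (ax - bx^3 + s)dt + noise
    drift = a * x[k-1] - b * x[k-1]**3 + s[k-1]
    x[k] = x[k-1] + dt * drift + np.sqrt(2 * D * dt) * rng.normal()

X = np.abs(np.fft.rfft(x * np.hanning(t.size)))**2
f = np.fft.rfftfreq(t.size, dt)
k0 = np.argmin(np.abs(f - f0))
print(f"output power at {f0} Hz over background: "
      f"{10 * np.log10(X[k0] / np.median(X)):.1f} dB")
```

An adaptive scheme would repeat this evaluation while searching over a and b for the maximum output signal-to-noise ratio.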
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter and performing correction on transmit and receive have proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Iteration of aberration correction with a TDA filter was simulated to study its convergence properties, with aberration generated by weak and strong human-body wall models, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
Campos, Fernanda Magalhães Freire; Repoles, Laura Cotta; de Araújo, Fernanda Fortes; Peruhype-Magalhães, Vanessa; Xavier, Marcelo Antônio Pascoal; Sabino, Ester Cerdeira; de Freitas Carneiro Proietti, Anna Bárbara; Andrade, Mariléia Chaves; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis; Gontijo, Célia Maria Ferreira
2018-04-01
A relevant issue in Chagas disease serological diagnosis regards the requirement of using several confirmatory methods to elucidate the status of non-negative results from blood bank screening. The development of a single reliable method may potentially help distinguish true and false positive results. Our aim was to evaluate the performance of the multiplexed flow-cytometry anti-T. cruzi/Leishmania IgG1 serology (FC-TRIPLEX Chagas/Leish IgG1) against three conventional confirmatory criteria (ELISA-EIA, immunofluorescence assay-IIF and the EIA/IIF consensus criterion) to define the final status of samples with actual/previous non-negative results during anti-T. cruzi ELISA screening in blood banks. Apart from inconclusive results, FC-TRIPLEX presented a weak agreement index with EIA, while a strong agreement was observed when either the IIF or the EIA/IIF consensus criterion was applied. Discriminant analysis and Spearman's correlation further corroborated the agreement scores. ROC curve analysis showed that FC-TRIPLEX performance indexes were higher when IIF and EIA/IIF consensus were used as the confirmatory criterion. Logistic regression analysis further demonstrated that the probability of FC-TRIPLEX yielding positive results was higher for inconclusive results from IIF and EIA/IIF consensus. Machine learning tools illustrated the high level of categorical agreement between FC-TRIPLEX and IIF or EIA/IIF consensus. Together, these findings demonstrate the usefulness of FC-TRIPLEX as a tool to elucidate the status of non-negative results in blood bank screening for Chagas disease. Copyright © 2018. Published by Elsevier B.V.
High-Resolution Infrared Filter System for Solar Spectroscopy and Polarimetry
NASA Astrophysics Data System (ADS)
Cao, W.; Ma, J.; Wang, J.; Goode, P. R.; Wang, H.; Denker, C.
2003-05-01
We report on the design of an imaging filter system working in the near infrared (NIR) at 1.56 μm to obtain monochromatic images and to probe weak magnetic fields in different layers of the deep photosphere with high temporal and spatial resolution at Big Bear Solar Observatory (BBSO). This filter system consists of an interference filter, a birefringent filter, and a Fabry-Pérot etalon. As the narrowest element of the system, the infrared Fabry-Pérot etalon plays an important role in achieving narrow-band transmission and high throughput while maintaining wavelength tunability and assuring stability and reliability. In this poster, we outline a set of methods for the evaluation and calibration of the near-infrared Fabry-Pérot etalon. Two-dimensional characteristic maps of the etalon, including full-width-at-half-maximum (FWHM), effective finesse, peak transmission, free spectral range, flatness, roughness, stability and repeatability, were obtained with laboratory equipment. Finally, using these results, a detailed analysis of the filter performance for the Fe I 1.5648 μm and Fe I 1.5652 μm Zeeman-sensitive lines is presented. These results will benefit the design of the NIR spectro-polarimeter of the Advanced Technology Solar Telescope (ATST).
Fixing the Leak: Empirical Corrections for the Small Light Leak in Hinode XRT
NASA Astrophysics Data System (ADS)
Saar, Steven H.; DeLuca, E. E.; McCauley, P.; Kobelski, A.
2013-07-01
On May 9, 2012, the straylight level of XRT on Hinode suddenly increased, consistent with the appearance of a pinhole in the entrance filter (possibly a micrometeorite breach). The effect of this event is most noticeable in the optical G-band data, which show an average light excess of ~30%. However, data in several of the X-ray filters are also affected, due to the low-sensitivity "tails" of their filter responses into the visible. Observations taken with the G-band filter but with the visible light shutter (VLS) closed show a weak, slightly shifted, out-of-focus image, revealing the leaked light. The intensity of the leak depends on telescope pointing, dropping strongly for images taken off-disk. By monitoring light levels in the corners of full-Sun Ti-poly filter images, we determine the approximate time of the event: ~13:30 UT. We use pairs of images taken just before and after the filter breach to directly measure the leakage in two affected X-ray filters. We then develop a model using scaled, shifted, and smoothed versions of the VLS-closed images to remove the contamination. We estimate the uncertainties involved in our proposed correction procedure. This research was supported under NASA contract NNM07AB07C for Hinode XRT.
An OTA-C filter for ECG acquisition systems with highly linear range and less passband attenuation
NASA Astrophysics Data System (ADS)
Jihai, Duan; Chuang, Lan; Weilin, Xu; Baolin, Wei
2015-05-01
A fifth-order operational transconductance amplifier-C (OTA-C) Butterworth-type low-pass filter with a highly linear range and low passband attenuation is presented for wearable bio-telemetry monitoring applications in a UWB wireless body area network. The source degeneration structure applied in typical small-transconductance circuits is improved to provide a highly linear range for the OTA-C filter. Moreover, to reduce the passband attenuation of the filter, a cascode structure is employed as the output stage of the OTA. The OTA-based circuit operates in weak inversion due to the strict power limitations of the biomedical chip. The filter is fabricated in a SMIC 0.18-μm CMOS process. The measured results show a passband gain of -6.2 dB, with a -3-dB frequency around 276 Hz. For a 0.8 VPP sinusoidal input at 100 Hz, a total harmonic distortion (THD) of -56.8 dB is obtained. An electrocardiogram signal with noise interference is fed into the chip to validate the function of the designed filter. Project supported by the National Natural Science Foundation of China (Nos. 61161003, 61264001, 61166004) and the Guangxi Natural Science Foundation (No. 2013GXNSFAA019333).
Signal conditioning units for vibration measurement in HUMS
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Liu, Tingting; Yu, Zirong; Chen, Lijuan; Huang, Xinjie
2018-03-01
A signal conditioning unit for vibration measurement in HUMS is proposed in this paper. Because the frequencies of the vibrations caused by different helicopter components differ, a two-stage amplifier and a programmable anti-aliasing filter are designed so that the unit can serve the measurement needs of different types of helicopter. Vibration signals are first converted into measurable electrical signals by an ICP driver. A pre-amplifier and a programmable gain amplifier are then applied to magnify the weak electrical signals, and the programmable anti-aliasing filter is used to suppress noise interference. The unit was tested using a function signal generator and an oscilloscope. The experimental results demonstrate the effectiveness of the proposed method both quantitatively and qualitatively, and the method presented in this paper can meet the measurement requirements of different types of helicopter.
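The conditioning chain has a natural software analogue. A minimal sketch using scipy.signal, with a Butterworth low-pass standing in for the programmable anti-aliasing filter; the sampling rate, gain and cutoff here are illustrative assumptions, not the unit's actual settings.

```python
# Gain stage plus anti-aliasing low-pass for a weak vibration signal (sketch).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def condition(signal, fs, gain, cutoff_hz, order=4):
    """Amplify a weak sensor signal, then low-pass it before sampling."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, gain * signal)

fs = 20_000.0
t = np.arange(0, 1.0, 1 / fs)
vib = 0.01 * np.sin(2 * np.pi * 120 * t)    # weak in-band vibration component
hf = 0.01 * np.sin(2 * np.pi * 9_000 * t)   # out-of-band component to reject
out = condition(vib + hf, fs, gain=100.0, cutoff_hz=1_000.0)
spec = np.abs(np.fft.rfft(out)) / t.size    # 1 Hz bins at this length
print(f"9 kHz residual vs 120 Hz line: {spec[9000]:.4f} vs {spec[120]:.4f}")
```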
Electrodynamic study of YIG filters and resonators
Krupka, Jerzy; Salski, Bartlomiej; Kopyt, Pawel; Gwarek, Wojciech
2016-01-01
Numerical solutions of coupled Maxwell and Landau-Lifshitz-Gilbert equations for a magnetized yttrium iron garnet (YIG) sphere acting as a one-stage filter are presented. The filter is analysed using finite-difference time-domain technique. Contrary to the state of the art, the study shows that the maximum electromagnetic power transmission through the YIG filter occurs at the frequency of the magnetic plasmon resonance with the effective permeability of the gyromagnetic medium μr ≈ −2, and not at a ferromagnetic resonance frequency. Such a new understanding of the YIG filter operation, makes it one of the most commonly used single-negative plasmonic metamaterials. The frequency of maximum transmission is also found to weakly depend on the size of the YIG sphere. An analytic electromagnetic analysis of resonances in a YIG sphere is performed for circularly polarized electromagnetic fields. The YIG sphere is situated in a free space and in a large spherical cavity. The study demonstrates that both volume resonances and magnetic plasmon resonances can be solutions of the same transcendental equations. PMID:27698467
Pham, Quang Duc; Kusumi, Yuichi; Hasegawa, Satoshi; Hayasaki, Yoshio
2012-10-01
We propose a new method for three-dimensional (3D) position measurement of nanoparticles using an in-line digital holographic microscope. The method improves the signal-to-noise ratio of the amplitude of the interference fringes, and hence the accuracy of the position measurement, by increasing the weak scattered light from a nanoparticle relative to the reference light using a low-spatial-frequency attenuation filter. We demonstrated the improvements in the signal-to-noise ratio of the optical system and the contrast of the interference fringes, allowing the 3D positions of nanoparticles to be determined more precisely.
Morphological operators for enhanced polarimetric image target detection
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton S.
2015-09-01
We introduce an algorithm based on morphological filters with the Stokes parameters that augments the daytime and nighttime detection of weak-signal manmade objects immersed in a predominant natural background scene. The approach features a tailored sequence of signal-enhancing filters, consisting of core morphological operators (dilation, erosion) and higher level morphological operations (e.g., spatial gradient, opening, closing) to achieve a desired overarching goal. Using representative data from the SPICE database, the results show that the approach was able to automatically and persistently detect with a high confidence level the presence of three mobile military howitzer surrogates (targets) in natural clutter.
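The operator chain is easy to sketch with standard grey-scale morphology. The toy example below uses scipy.ndimage (opening, closing, morphological gradient) on a synthetic scene; the structuring-element sizes and detection threshold are illustrative, not the tailored sequence used in the paper.

```python
# Grey-scale morphological chain for enhancing a weak-signal object (sketch).
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
scene = rng.normal(0.2, 0.05, (128, 128))   # stand-in for a Stokes-parameter image
scene[60:68, 60:68] += 0.3                  # weak manmade object in clutter

opened = ndi.grey_opening(scene, size=(3, 3))           # suppress small bright noise
closed = ndi.grey_closing(opened, size=(3, 3))          # fill small dark gaps
grad = ndi.morphological_gradient(closed, size=(5, 5))  # dilation minus erosion
detection = grad > grad.mean() + 3 * grad.std()         # crude threshold on edges
print("flagged pixels:", int(detection.sum()))
```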
NASA Astrophysics Data System (ADS)
Abdalla, F. B.; Amara, A.; Capak, P.; Cypriano, E. S.; Lahav, O.; Rhodes, J.
2008-07-01
We study in detail the photometric redshift requirements needed for tomographic weak gravitational lensing in order to measure accurately the dark energy equation of state. In particular, we examine how ground-based photometry (u, g, r, i, z, y) can be complemented by space-based near-infrared (near-IR) photometry (J, H), e.g. onboard the planned DUNE satellite. Using realistic photometric redshift simulations and an artificial neural network photo-z method, we evaluate the figure of merit for the dark energy parameters (w0, wa). We consider a DUNE-like broad optical filter supplemented with ground-based multiband optical data from surveys like the Dark Energy Survey, Pan-STARRS and LSST. We show that the dark energy figure of merit would be improved by a factor of 1.3-1.7 if IR filters are added onboard DUNE. Furthermore, we show that with IR data catastrophic photo-z outliers can be removed effectively. There is an interplay between the choice of filters, the magnitude limits and the removal of outliers. We draw attention to the dependence of the results on the galaxy formation scenarios encoded into the mock galaxies, e.g. the galaxy reddening. For example, very deep u-band data could be as effective as the IR. We also find that about 10^5-10^6 spectroscopic redshifts are needed for calibration of the full survey.
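The neural-network photo-z step can be illustrated in miniature. The sketch below trains scikit-learn's MLPRegressor on synthetic magnitudes whose toy colour-redshift relation is an assumption for illustration only; it is not the paper's simulation pipeline or network.

```python
# Toy artificial-neural-network photometric redshift estimator.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, bands = 5000, 8                      # e.g. u,g,r,i,z,y plus J,H
z = rng.uniform(0.0, 2.0, n)            # "spectroscopic" training redshifts
# Magnitudes loosely correlated with redshift, plus photometric noise.
mags = 22 + np.outer(z, np.linspace(-0.5, 1.0, bands)) + rng.normal(0, 0.1, (n, bands))

m_tr, m_te, z_tr, z_te = train_test_split(mags, z, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(m_tr, z_tr)
print(f"photo-z scatter: {np.std(net.predict(m_te) - z_te):.3f}")
```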
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained; the estimation of C thus involves only a few simple calculations. The method successfully retrieves sinusoidal and aleatory displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with the errors resulting from noise and the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the displacement retrieved using the C obtained with the proposed method is comparable to that from the joint estimation of C and α. Moreover, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
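The smoothing role of the Kalman filter can be shown with a generic one-dimensional example. The sketch below runs a constant-velocity Kalman filter on a noisy displacement record; the motion, noise levels and tuning are illustrative, and the self-mixing phase-unwrapping step that would precede it is omitted.

```python
# Constant-velocity Kalman filter smoothing a noisy displacement signal.
import numpy as np

dt = 1e-3
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [displacement, velocity]
H = np.array([[1.0, 0.0]])              # only displacement is observed
q = 5e2                                 # white-acceleration intensity (loose tuning)
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[0.01]])                  # measurement variance (0.1 um std)

t = np.arange(0, 1, dt)
true_d = 2.0 * np.sin(2 * np.pi * 3 * t)     # target displacement, in um
meas = true_d + 0.1 * np.random.default_rng(0).normal(size=t.size)

x, P, est = np.zeros(2), np.eye(2), []
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)             # update
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])
err = np.asarray(est) - true_d
print(f"rms error: {np.sqrt((err**2).mean()):.3f} um (raw noise: 0.100 um)")
```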
Weakly-tunable transmon qubits in a multi-qubit architecture
NASA Astrophysics Data System (ADS)
Hertzberg, Jared; Bronn, Nicholas; Corcoles, Antonio; Brink, Markus; Keefe, George; Takita, Maika; Hutchings, M.; Plourde, B. L. T.; Gambetta, Jay; Chow, Jerry
Quantum error-correction employing a 2D lattice of qubits requires a strong coupling between adjacent qubits and consistently high gate fidelity among them. In such a system, all-microwave cross-resonance gates offer simplicity of setup and operation. However, the relative frequencies of adjacent qubits must be carefully arranged in order to optimize gate rates and eliminate unwanted couplings. We discuss the incorporation of weakly-flux-tunable transmon qubits into such an architecture. Using DC tuning through filtered flux-bias lines, we adjust qubit frequencies while minimizing the effects of flux noise on decoherence.
Weak decays of heavy hadrons into dynamically generated resonances
Oset, Eulogio; Liang, Wei -Hong; Bayar, Melahat; ...
2016-01-28
In this study, we present a review of recent works on weak decays of heavy mesons and baryons with two mesons, or a meson and a baryon, interacting strongly in the final state. The aim is to learn about the interaction of hadrons and how some particular resonances are produced in the reactions. It is shown that these reactions have peculiar features and act as filters for some quantum numbers, which allows some resonances to be identified easily and their nature to be studied. The combination of basic elements of the weak interaction with the framework of the chiral unitary approach allows for an interpretation of the results of many reactions and adds novel information on different aspects of the hadron interaction and the properties of dynamically generated resonances.
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
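The extended Kalman filter applied to the Lorenz model can be sketched briefly. The example below tracks Lorenz-63 from noisy observations of the first component only; the Euler step, noise covariances and observation schedule are illustrative choices, not those of the study.

```python
# Extended Kalman filter tracking the Lorenz-63 system (toy configuration).
import numpy as np

sigma, rho, beta, dt = 10.0, 28.0, 8 / 3, 0.01

def f(s):                                  # one Euler step of Lorenz-63
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def F_jac(s):                              # Jacobian of the Euler map
    x, y, z = s
    J = np.array([[-sigma, sigma, 0.0],
                  [rho - z, -1.0, -x],
                  [y, x, -beta]])
    return np.eye(3) + dt * J

H = np.array([[1.0, 0.0, 0.0]])            # observe the x component only
Q, R = 1e-3 * np.eye(3), np.array([[1.0]])

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.0, 25.0])
est, P, err = truth + rng.normal(0, 1, 3), np.eye(3), []
for k in range(2000):
    truth = f(truth)
    Fk = F_jac(est)
    est, P = f(est), Fk @ P @ Fk.T + Q     # predict every step
    if k % 5 == 0:                         # a noisy observation every 5 steps
        z = truth[0] + rng.normal(0, 1.0)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        est = est + K @ (np.array([z]) - H @ est)
        P = (np.eye(3) - K @ H) @ P
    err.append(np.linalg.norm(est - truth))
print(f"mean tracking error: {np.mean(err):.2f}")
```

Consistent with the discussion above, tracking quality degrades sharply if the observations are made sparser or noisier.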
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
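A representative member of such an algorithm bank is the cell-averaging CFAR detector, sketched below; the window sizes and design false-alarm probability are illustrative choices.

```python
# Cell-averaging CFAR detection on a 1-D power profile (sketch).
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Flag cells exceeding a threshold scaled from the local noise estimate."""
    det = np.zeros(power.size, dtype=bool)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1)   # CA-CFAR scale factor
    half = n_train // 2 + n_guard
    for i in range(half, power.size - half):
        lead = power[i - half : i - n_guard]          # training cells before CUT
        lag = power[i + n_guard + 1 : i + half + 1]   # training cells after CUT
        noise = (lead.sum() + lag.sum()) / n_train
        det[i] = power[i] > alpha * noise
    return det

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 500)    # exponentially distributed clutter power
clutter[250] += 40.0                   # injected target
print("detections at cells:", np.flatnonzero(ca_cfar(clutter)))
```

An expert-system layer of the kind proposed above would switch among variants of this detector (e.g. different window lengths or ordered-statistic rules) according to the sensed environment.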
Method and apparatus for detecting a desired behavior in digital image data
Kegelmeyer, Jr., W. Philip
1997-01-01
A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.
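The patented pipeline maps naturally onto standard tools. Below is a minimal sketch, assuming scikit-learn's DecisionTreeClassifier and synthetic per-pixel features (stand-ins, not the ALOE feature set), with a median filter enforcing local consensus on the probability image.

```python
# Per-pixel features -> decision tree -> probability image -> spatial filter.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:30, 20:30] += 0.5                  # bright lesion-like region
truth = np.zeros((64, 64), dtype=int)
truth[20:30, 20:30] = 1

# Two simple per-pixel features: intensity and a local smoothed intensity.
local = median_filter(img, size=5)
X = np.stack([img.ravel(), local.ravel()], axis=1)

# Train on randomly sampled pixels, mirroring the reference-image step.
idx = rng.choice(X.shape[0], 1500, replace=False)
tree = DecisionTreeClassifier(max_depth=5).fit(X[idx], truth.ravel()[idx])

prob_image = tree.predict_proba(X)[:, 1].reshape(img.shape)
consensus = median_filter(prob_image, size=5)   # enforce local consensus
print("peak filtered probability:", round(float(consensus.max()), 2))
```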
Mantovani, Luciana; Tribaudino, Mario; Solzi, Massimo; Barraco, Vera; De Munari, Eriberto; Pironi, Claudia
2018-08-01
In this work, both PM10 filters and leaves were collected daily over a period of five months and compared systematically. Filters were taken from an air-quality monitoring station and leaves from two Tilia cordata trees, both located near the railway station of Parma. SEM-EDS analysis on the surface and across the leaves shows that magnetic particles are almost entirely made of magnetite and are found invariably on the leaf surface. The saturation isothermal remanent magnetization (SIRM) shows that for both filters and leaves the magnetic fraction consists mainly of a low-coercivity, magnetite-like phase. The magnetic signals of filters and leaves are compared with atmospheric PM concentrations. The correlation is better for filters, mostly with parameters related to vehicular pollution, and improves for both filters and leaves once data are averaged on a 10-day basis. Filters and leaves equally show an increase in magnetic signal during the fall-winter period, together with PM10 content. The comparison between leaves and filters shows that: 1) leaves give a qualitative picture, and in our case they could be used as environmental proxies after averaging the results over multiple days; 2) the correlation with PM10 is weaker, indicating that there is a PM10 contribution from non-magnetic particles, such as calcite and clay minerals, pollen and spores; 3) the multidomain-particle contribution from filters indicates a strong relation with vehicular pollution, suggesting an important role for larger particles; 4) magnetizations from leaves and filters are only weakly related, due to their different sampling intervals. Copyright © 2018 Elsevier Ltd. All rights reserved.
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.
Pulsipher, Michael A.; Skinner, Roderick; McDonald, George B.; Hingorani, Sangeeta; Armenian, Saro H.; Cooke, Kenneth R.; Gracia, Clarisa; Petryk, Anna; Bhatia, Smita; Bunin, Nancy; Nieder, Michael L.; Dvorak, Christopher C.; Sung, Lillian; Sanders, Jean E.; Kurtzberg, Joanne; Baker, K. Scott
2012-01-01
Existing standards for screening and management of late effects occurring in children who have undergone hematopoietic cell transplantation (HCT) include recommendations from pediatric cancer networks and consensus guidelines from adult-oriented transplantation societies applicable to all recipients of HCT. While these approaches have significant merit, they are not pediatric-HCT focused and they do not address post-HCT challenges faced by children with complex non-malignant disorders. In this article we discuss the strengths and weaknesses of current published recommendations and conclude that pediatric-specific guidelines for post-HCT screening and management would be beneficial to the long-term health of these patients and would promote late-effects research in this field. Our panel of late effects experts also provides recommendations for follow up and therapy of selected post-HCT organ and endocrine complications in pediatric patients. PMID:22248713
Alahnomi, Rammah A; Zakaria, Z; Ruslan, E; Ab Rashid, S R; Mohd Bahar, Amyrul Azuan; Shaaban, Azizah
2017-01-01
A novel symmetrical split ring resonator (SSRR) based microwave sensor with spurline filters for detecting and characterizing the properties of solid materials has been developed. Because of the weak perturbation in the interaction between the material under test (MUT) and a planar microwave sensor, spurline filters were embedded in the SSRR microwave sensor, which effectively enhanced the Q-factor while suppressing the undesired harmonic frequency. The spurline filter structures force the presented sensor to resonate at a fundamental frequency of 2.2 GHz while suppressing the rejected harmonic frequency and miniaturizing the circuit size. A wide rejection bandwidth is achieved by using double spurline filters, with a high Q-factor (up to 652.94) compared to a single spurline filter. The new SSRR sensor with spurline filters displayed desirable properties such as high sensitivity, accuracy, and performance, with a typical percentage error of 1.3% in the measurement results. Furthermore, the sensor has been successfully applied to detecting and characterizing solid materials (such as Rogers 5880, Rogers 4350, and FR4) and has clearly demonstrated that it can suppress the harmonic frequency effectively. This novel design with harmonic suppression is useful for various applications such as the food industry (meat, fruit, vegetables), biological medicine (products derived from proteins and other substances produced by the body), and therapeutic goods (antiseptics, vitamins, anti-psychotics, and other medicines).
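The reported figures imply a simple relation worth making explicit: the loaded Q-factor is the resonant frequency divided by the -3 dB bandwidth. A tiny sketch using the numbers quoted above (the bandwidth is back-derived here, not a measured value):

```python
# Q-factor / bandwidth relation implied by the reported sensor figures.
f0 = 2.2e9          # fundamental resonant frequency, Hz (from the abstract)
q = 652.94          # reported Q-factor
bw_3db = f0 / q     # implied -3 dB bandwidth
print(f"implied -3 dB bandwidth: {bw_3db / 1e6:.2f} MHz")
```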
Binocular contrast-gain control for natural scenes: Image structure and phase alignment.
Huang, Pi-Chun; Dai, Yu-Ming
2018-05-01
In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.
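For orientation, one common form of such a two-stage model can be written down explicitly; the equations below are a generic sketch with illustrative exponents and constants, not the fitted model of this study.

```latex
% Stage 1: monocular responses with interocular (dichoptic) suppression
R_L = \frac{C_L^{m}}{S + C_L + w\,C_R}, \qquad
R_R = \frac{C_R^{m}}{S + C_R + w\,C_L}
% Stage 2: binocular summation followed by a second gain control
R_{\mathrm{bin}} = \frac{(R_L + R_R)^{p}}{Z + (R_L + R_R)^{q}}
```

Here C_L and C_R are the contrasts presented to the left and right eyes, w sets the interocular suppression weight, and S and Z are saturation constants; phase-alignment effects of the kind reported above would enter through the suppressive terms.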
Impacts of backwashing on granular activated carbon filters for advanced wastewater treatment.
Frank, Joshua; Ruhl, Aki Sebastian; Jekel, Martin
2015-12-15
The use of granular activated carbon (GAC) in fixed-bed filters is a promising option for the removal of organic micropollutants (OMP) from wastewater treatment plant effluents. Frequent backwashing of the filter bed is inevitable, but its effect on potential filter stratification is not yet well understood and was therefore evaluated in the present study for two commercial GAC products. Backwashing of GAC filters was simulated with 10 or 100 filter bed expansions of 20 or 100% at backwash velocities of 12 and 40 m/h, respectively. Five vertical fractions were extracted and revealed a vertical stratification according to grain size and material density. Sieve analyses indicated increasing grain sizes towards the bottom for one GAC, while grain sizes of the other GAC were more homogeneously distributed throughout the filter bed. The apparent densities of the top sections were significantly lower than those of the bottom sections for both products. Comparative long-term fixed-bed adsorption experiments with the top and bottom sections of the stratified GAC showed remarkable differences in the breakthrough curves of dissolved organic carbon, UV light absorption at 254 nm wavelength (UVA254) and OMP. GAC from the upper section showed consistently better removal efficiencies than GAC from the bottom section, especially for weakly adsorbing OMP such as sulfamethoxazole. Furthermore, correlations between UVA254 reductions and OMP removals were found. Copyright © 2015 Elsevier Ltd. All rights reserved.
Diagnosis and Management of Iliac Artery Endofibrosis: Results of a Delphi Consensus Study.
2016-07-01
Iliac endofibrosis is a rare condition that may result in a reduction of blood flow to the lower extremity in young, otherwise healthy individuals. The data to inform everyday clinical management are weak and therefore a Delphi consensus methodology was used to explore areas of consensus and disagreement concerning the diagnosis and management of patients with suspected iliac endofibrosis. A three-round Delphi questionnaire approach was used among vascular surgeons, sports physicians, sports scientists, radiologists, and clinical vascular scientists with experience of treating this condition to explore diagnosis and clinical management issues for patients with suspected iliac artery endofibrosis. Analysis is based on 18 responses to round 2 and 14 responses to round 3, with agreement reported when 70% of respondents were in agreement. Initially there was agreement on the typical symptoms at presentation and the need for an exercise test in the diagnosis. Round 3 clarified that duplex ultrasound was a useful tool in the diagnosis of endofibrosis. There was consensus on the most appropriate type of surgery (endarterectomy and vein patch) and that endovascular interventions were inadvisable. The final round helped to inform aspects of the natural history and post-operative surveillance. Progression of the disease was likely with continued exercise but cessation may prevent progression. Surveillance after surgery is generally recommended yearly with at least a clinical assessment. There is broad agreement about the presenting symptoms and the investigations required to confirm (or exclude) the diagnosis of iliac endofibrosis. There was consensus on the surgical approach to repair. Disagreement existed about the specific diagnostic criteria that should be applied during non-invasive testing and about post-operative care and resumption of exercise. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Pacchiarotti, Isabella; Bond, David J.; Baldessarini, Ross J.; Nolen, Willem A.; Grunze, Heinz; Licht, Rasmus W.; Post, Robert M.; Berk, Michael; Goodwin, Guy M.; Sachs, Gary S.; Tondo, Leonardo; Findling, Robert L.; Youngstrom, Eric A.; Tohen, Mauricio; Undurraga, Juan; González-Pinto, Ana; Goldberg, Joseph F.; Yildiz, Ayşegül; Altshuler, Lori L.; Calabrese, Joseph R.; Mitchell, Philip B.; Thase, Michael E.; Koukopoulos, Athanasios; Colom, Francesc; Frye, Mark A.; Malhi, Gin S.; Fountoulakis, Konstantinos N.; Vázquez, Gustavo; Perlis, Roy H.; Ketter, Terence A.; Cassidy, Frederick; Akiskal, Hagop; Azorin, Jean-Michel; Valentí, Marc; Mazzei, Diego Hidalgo; Lafer, Beny; Kato, Tadafumi; Mazzarini, Lorenzo; Martínez-Aran, Anabel; Parker, Gordon; Souery, Daniel; Özerdem, Ayşegül; McElroy, Susan L.; Girardi, Paolo; Bauer, Michael; Yatham, Lakshmi N.; Zarate, Carlos A.; Nierenberg, Andrew A.; Birmaher, Boris; Kanba, Shigenobu; El-Mallakh, Rif S.; Serretti, Alessandro; Rihmer, Zoltan; Young, Allan H.; Kotzalidis, Georgios D.; MacQueen, Glenda M.; Bowden, Charles L.; Ghaemi, S. Nassir; Lopez-Jaramillo, Carlos; Rybakowski, Janusz; Ha, Kyooseob; Perugi, Giulio; Kasper, Siegfried; Amsterdam, Jay D.; Hirschfeld, Robert M.; Kapczinski, Flávio; Vieta, Eduard
2014-01-01
Objective The risk-benefit profile of antidepressant medications in bipolar disorder is controversial. When conclusive evidence is lacking, expert consensus can guide treatment decisions. The International Society for Bipolar Disorders (ISBD) convened a task force to seek consensus recommendations on the use of antidepressants in bipolar disorders. Method An expert task force iteratively developed consensus through serial consensus-based revisions using the Delphi method. Initial survey items were based on systematic review of the literature. Subsequent surveys included new or reworded items and items that needed to be rerated. This process resulted in the final ISBD Task Force clinical recommendations on antidepressant use in bipolar disorder. Results There is striking incongruity between the wide use of and the weak evidence base for the efficacy and safety of antidepressant drugs in bipolar disorder. Few well-designed, long-term trials of prophylactic benefits have been conducted, and there is insufficient evidence for treatment benefits with antidepressants combined with mood stabilizers. A major concern is the risk for mood switch to hypomania, mania, and mixed states. Integrating the evidence and the experience of the task force members, a consensus was reached on 12 statements on the use of antidepressants in bipolar disorder. Conclusions Because of limited data, the task force could not make broad statements endorsing antidepressant use but acknowledged that individual bipolar patients may benefit from antidepressants. Regarding safety, serotonin reuptake inhibitors and bupropion may have lower rates of manic switch than tricyclic and tetracyclic antidepressants and norepinephrine-serotonin reuptake inhibitors. The frequency and severity of antidepressant-associated mood elevations appear to be greater in bipolar I than bipolar II disorder. Hence, in bipolar I patients antidepressants should be prescribed only as an adjunct to mood-stabilizing medications. PMID:24030475
RAPHAEL, K. G.; SANTIAGO, V.; LOBBEZOO, F.
2017-01-01
Summary Inspired by the international consensus on defining and grading of bruxism (Lobbezoo F, Ahlberg J, Glaros AG, Kato T, Koyano K, Lavigne GJ et al. J Oral Rehabil. 2013;40:2), this commentary examines its contribution and underlying assumptions for defining sleep bruxism (SB). The consensus’ parsimonious redefinition of bruxism as a behaviour is an advance, but we explore an implied question: might SB be more than behaviour? Behaviours do not inherently require clinical treatment, making the consensus-proposed ‘diagnostic grading system’ inappropriate. However, diagnostic grading might be useful, if SB were considered a disorder. Therefore, to fully appreciate the contribution of the consensus statement, we first consider standards and evidence for determining whether SB is a disorder characterised by harmful dysfunction or a risk factor increasing probability of a disorder. Second, the strengths and weaknesses of the consensus statement’s proposed ‘diagnostic grading system’ are examined. The strongest evidence-to-date does not support SB as disorder as implied by ‘diagnosis’. Behaviour alone is not diagnosed; disorders are. Considered even as a grading system of behaviour, the proposed system is weakened by poor sensitivity of self-report for direct polysomnographic (PSG)-classified SB and poor associations between clinical judgments of SB and portable PSG; reliance on dichotomised reports; and failure to consider SB behaviour on a continuum, measurable and definable through valid behavioural observation. To date, evidence for validity of self-report or clinician report in placing SB behaviour on a continuum is lacking, raising concerns about their potential utility in any bruxism behavioural grading system, and handicapping future study of whether SB may be a useful risk factor for, or itself a disorder requiring treatment. PMID:27283599
Measurement Techniques for Transmit Source Clock Jitter for Weak Serial RF Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Schlesinger, Adam M.
2010-01-01
Techniques for filtering clock jitter measurements are developed, in the context of controlling data modulation jitter on an RF carrier to accommodate low signal-to-noise ratio thresholds of high-performance error correction codes. Measurement artifacts from sampling are considered, and a tutorial on interpretation of direct readings is included.
Receiver-Coupling Schemes Based On Optimal-Estimation Theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of their outputs. In both schemes, optimal mutual-coupling weights are computed according to Kalman-filter theory; the schemes differ in the manner in which receiver outputs are transmitted and combined.
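The abstract gives no equations, but for a static signal the steady-state form of Kalman-derived combining weights is inverse-variance weighting, which the following minimal Python sketch illustrates. The waveform, noise variances, and SNR bookkeeping are illustrative assumptions, not details of the NTRS report.

```python
import numpy as np

# Illustrative sketch (not the report's algorithm): coherently combine
# noisy copies of a common phase-modulated signal using inverse-variance
# weights, the steady-state form a Kalman-derived optimal combiner takes
# for a static signal. Receiver noise variances are assumed known.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sign(np.sin(2 * np.pi * 5 * t))        # toy BPSK-like waveform

noise_var = np.array([0.5, 1.0, 2.0])              # per-receiver noise power
received = [signal + rng.normal(0, np.sqrt(v), t.size) for v in noise_var]

weights = (1.0 / noise_var) / np.sum(1.0 / noise_var)  # optimal linear weights
combined = sum(w * r for w, r in zip(weights, received))

snr_single = 1.0 / noise_var.min()
snr_combined = np.sum(1.0 / noise_var)             # SNRs add under optimal combining
print(f"best single-receiver SNR {snr_single:.2f}, combined SNR {snr_combined:.2f}")
```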
Tomkins-Lane, Christy; Melloh, Markus; Lurie, Jon; Smuck, Matt; Battié, Michele C; Freeman, Brian; Samartzis, Dino; Hu, Richard; Barz, Thomas; Stuber, Kent; Schneider, Michael; Haig, Andrew; Schizas, Constantin; Cheung, Jason Pui Yin; Mannion, Anne F; Staub, Lukas; Comer, Christine; Macedo, Luciana; Ahn, Sang-Ho; Takahashi, Kazuhisa; Sandella, Danielle
2016-08-01
Study design: Delphi. The aim of this study was to obtain an expert consensus on which history factors are most important in the clinical diagnosis of lumbar spinal stenosis (LSS). LSS is a poorly defined clinical syndrome. Criteria for defining LSS are needed and should be informed by the experience of expert clinicians. Phase 1 (Delphi Items): 20 members of the International Taskforce on the Diagnosis and Management of LSS confirmed a list of 14 history items. An online survey was developed that permits specialists to express the logical order in which they consider the items, and the level of certainty ascertained from the questions. Phase 2 (Delphi Study) Round 1: Survey distributed to members of the International Society for the Study of the Lumbar Spine. Round 2: Meeting of 9 members of the Taskforce where consensus was reached on a final list of 10 items. Round 3: Final survey was distributed internationally. Phase 3: Final Taskforce consensus meeting. A total of 279 clinicians from 29 different countries, with a mean of 19 (SD 12) years in practice, participated. The seven top items were "leg or buttock pain while walking," "flex forward to relieve symptoms," "feel relief when using a shopping cart or bicycle," "motor or sensory disturbance while walking," "normal and symmetric foot pulses," "lower extremity weakness," and "low back pain." Significant change in certainty ceased after six questions at 80% (P < .05). This is the first study to reach an international consensus on the clinical diagnosis of LSS, and it suggests that within six questions clinicians are 80% certain of the diagnosis. We propose a consensus-based set of "seven history items" that can act as a pragmatic criterion for defining LSS in both clinical and research settings, which in the long term may lead to more cost-effective treatment, improved health care utilization, and enhanced patient outcomes. Level of Evidence: 2.
Evolution of deep-bed filtration of engine exhaust particulates with trapped mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viswanathan, Sandeep; Rothamer, David A.; Foster, David E.
Micro-scale filtration experiments were performed on cordierite filter samples using particulate matter (PM) generated by a spark-ignition direct-injection (SIDI) engine fueled with tier II EEE certification gasoline. Size-resolved mass and number concentrations were obtained from several engine operating conditions. The resultant mass-mobility relationships showed weak dependence on the operating condition. An integrated particle size distribution (IPSD) method was used to estimate the PM mass concentration in the exhaust stream from the SIDI engine and a heavy-duty diesel (HDD) engine. The average estimated mass concentration across all conditions was ~77% of the gravimetric measurements performed on Teflon filters. Despite the relatively low elemental carbon fraction (~0.4 to 0.7), the IPSD mass for stoichiometric SIDI exhaust was ~83±38% of the gravimetric measurement. Identical cordierite filter samples with properties representative of diesel particulate filters were sequentially loaded with PM from the different SIDI engine operating conditions, in order of increasing PM mass concentration. Simultaneous particle size distribution measurements upstream and downstream of the filter sample were used to evaluate filter performance evolution and the instantaneous trapped mass within the filter for two different filter face velocities. The evolution of filtration performance for the different samples was sensitive only to trapped mass, despite using PM from a wide range of operating conditions. Higher filtration velocity resulted in a more rapid shift of the most penetrating particle size towards smaller mobility diameters.
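The IPSD method is described only in outline above. As a hedged sketch, the following Python fragment integrates a toy mobility size distribution against a mass-mobility power law m(d) = k·d^Dm; the prefactor k, exponent Dm, and distribution shape are placeholders, not values from the study.

```python
import numpy as np

# Hedged sketch of an IPSD-style mass estimate: integrate a measured
# mobility size distribution against a mass-mobility power law
# m(d) = k * d**Dm. All parameter values below are illustrative.

d = np.logspace(1, 3, 60)                  # mobility diameters, nm
dlogd = np.diff(np.log10(d)).mean()
n_logd = 1e6 * np.exp(-0.5 * ((np.log10(d) - 2.0) / 0.25) ** 2)  # dN/dlogDp, #/cm^3

k, Dm = 2.0e-15, 2.5                       # mass-mobility parameters (illustrative)
mass_per_particle = k * d ** Dm            # mass per particle (placeholder units)

ipsd_mass = np.sum(n_logd * mass_per_particle * dlogd)   # integrated PM mass
print(f"IPSD mass concentration ~ {ipsd_mass:.3g} (arbitrary units)")
```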
An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data
NASA Astrophysics Data System (ADS)
Ghorbani, Esmaeil; Cha, Young-Jin
2018-04-01
Structural and mechanical system identification under dynamic loading has been an important research topic over the last three or four decades. Many Kalman-filtering-based approaches have been developed for linear and nonlinear systems. For example, to predict nonlinear systems, the unscented Kalman filter (UKF) has been applied. However, extensive literature reviews show that the UKF still performs weakly on systems with large degrees of freedom. In this research, a modified unscented Kalman filter is proposed by integration of a cubature Kalman filter to improve the system identification performance for systems with large degrees of freedom. The novelty of this work lies in conjugating the unscented transform with the cubature integration concept to obtain a more accurate output from the transformation of the state vector and its related covariance matrix. To evaluate the proposed method, three different numerical models (i.e., the single-degree-of-freedom Bouc-Wen model, the linear 3-degrees-of-freedom system, and the 10-degrees-of-freedom system) are investigated. To evaluate the robustness of the proposed method, high levels of noise in the measured response data are considered. The results show that the proposed method is significantly superior to the traditional UKF for noisy measured data in systems with large degrees of freedom.
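The paper's exact conjugation of the unscented transform with cubature integration is not reproduced here, but the standard third-degree spherical-radial cubature rule at its core is well documented: 2n equally weighted points. A minimal Python sketch of cubature-point propagation, with an arbitrary toy dynamics function:

```python
import numpy as np

# Standard third-degree spherical-radial cubature rule underlying the
# cubature Kalman filter: 2n points, all weighted 1/(2n). This shows one
# ingredient of the paper's method, not the authors' full algorithm.

def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                               # cov = S @ S.T
    units = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n)
    return mean[:, None] + S @ units                          # columns = points

def propagate(mean, cov, f):
    pts = cubature_points(mean, cov)
    fx = np.apply_along_axis(f, 0, pts)      # push each point through dynamics
    m = fx.mean(axis=1)                      # predicted mean (weights 1/2n)
    P = (fx - m[:, None]) @ (fx - m[:, None]).T / pts.shape[1]
    return m, P

m, P = propagate(np.zeros(2), np.eye(2),
                 lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] ** 3]))
print(m, P, sep="\n")
```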
Using a Weak CN Spectral Feature as a Marker for Massive AGB Stars in the Andromeda Galaxy
NASA Astrophysics Data System (ADS)
Guhathakurta, Puragra; Kamath, Anika; Sales, Alyssa; Sarukkai, Atmika; Hays, Jon; PHAT Collaboration; SPLASH Collaboration
2017-01-01
The Panchromatic Hubble Andromeda Treasury (PHAT) survey has produced six-filter photometry at near-ultraviolet, optical, and near-infrared wavelengths (F275W, F336W, F475W, F814W, F110W and F160W) for over 100 million stars in the disk of the Andromeda galaxy (M31). As part of the Spectroscopic and Photometric Landscape of Andromeda's Stellar Halo (SPLASH) survey, medium-resolution (R ~ 2000) spectra covering the wavelength range 4500-9500 Å were obtained for over 5000 relatively bright stars from the PHAT source catalog using the Keck II 10-meter telescope and DEIMOS spectrograph. While searching for carbon stars in the spectroscopic data set, we discovered a rare population of stars that show a weak CN spectral absorption feature at ~7900 Å (much weaker than the CN feature in typical carbon stars) along with other spectral absorption features like TiO and the Ca triplet that are generally not present/visible in carbon star spectra but that are typical for normal stars with oxygen-rich atmospheres. These 150 or so "weak CN" stars appear to be fairly localized in six-filter space (i.e., in various color-color and color-magnitude diagrams) but are generally offset from carbon stars. Comparison to PARSEC model stellar tracks indicates that these weak CN stars are probably massive (5-10 Msun) asymptotic giant branch (AGB) stars in a relatively short-lived core helium burning phase of their evolution. Careful spectroscopic analysis indicates that the CN spectral feature is about 3-4x weaker in weak CN stars than in carbon stars. The kinematics of weak CN stars are similar to those of other young stars (e.g., massive main sequence stars) and reflect the well-ordered rotation of M31's disk. This research project is funded in part by NASA/STScI and the National Science Foundation. Much of this work was carried out by high school students and undergraduates under the auspices of the Science Internship Program and LAMAT program at the University of California Santa Cruz.
Model-based auralizations of violin sound trends accompanying plate-bridge tuning or holding.
Bissinger, George; Mores, Robert
2015-04-01
To expose systematic trends in violin sound accompanying "tuning" only the plates or only the bridge, the first structural acoustics-based model auralizations of violin sound were created by passing a bowed-string driving force measured at the bridge of a solid body violin through the dynamic filter (DF) model radiativity profile "filter" RDF(f) (frequency-dependent pressure per unit driving force, free-free suspension, anechoic chamber). DF model auralizations for the more realistic case of a violin held/played in a reverberant auditorium reveal that holding the violin greatly diminishes its low frequency response, an effect only weakly compensated for by auditorium reverberation.
Sumriddetchkajorn, Sarun; Chaitavon, Kosom
2006-01-01
A surface plasmon resonance (SPR)-based optical touch sensor structure is proposed that provides high switch sensitivity and requires a weak activating force. Our proposed SPR-based optical touch sensor is arranged in a compact Kretschmann-Raether configuration in which the prism acting as our sensor head is coated with a metal nanofilm. Our optical-based noise rejection scheme relies on wavelength filtering, spatial filtering, and high reflectivity of the metal nanofilm, whereas our electrical-based noise reduction is obtained by means of an electrical signal filtering process. In our experimental proof of concept, a visible laser diode at a 655 nm centered wavelength and a prism made from BK7 with a 50 nm thick gold layer on the touching surface are used, showing a 7.85 dB optical contrast ratio for the first touch. A weak mechanical force, estimated at <0.1 N, is found to be sufficient to activate the desired electrical load. The sensor is tested for 51 operations without malfunction under typical and very high illumination of 342 and 3000 lx, respectively. In this case, a measured average optical contrast of 0.80 dB is obtained with a +/-0.47 dB fluctuation, implying that a refractive index change over a small 3.2% of the overall active area is enough for our SPR-based optical touch sensor to function properly. Increasing optical contrast in our SPR-based optical touch sensor can be accomplished by using a higher polarization-extinction ratio and a narrower-bandwidth optical beam. A controlled environment and a gold-coated surface deposited by thin-film sputtering can help improve the reliability and the durability of our SPR-based optical touch sensor. Other key features include ease of implementation, prevention of a light beam becoming incident on the user, and the ability to accept both strong and weak activating forces.
Zeng, Lu; Kortschak, R Daniel; Raison, Joy M; Bertozzi, Terry; Adelson, David L
2018-01-01
Transposable Elements (TEs) are mobile DNA sequences that make up significant fractions of amniote genomes. However, they are difficult to detect and annotate ab initio because of their variable features, lengths and clade-specific variants. We have addressed this problem by refining and developing a Comprehensive ab initio Repeat Pipeline (CARP) to identify and cluster TEs and other repetitive sequences in genome assemblies. The pipeline begins with a pairwise alignment using krishna, a custom aligner. Single linkage clustering is then carried out to produce families of repetitive elements. Consensus sequences are then filtered for protein coding genes and then annotated using Repbase and a custom library of retrovirus and reverse transcriptase sequences. This process yields three types of family: fully annotated, partially annotated and unannotated. Fully annotated families reflect recently diverged/young known TEs present in Repbase. The remaining two types of families contain a mixture of novel TEs and segmental duplications. These can be resolved by aligning these consensus sequences back to the genome to assess copy number vs. length distribution. Our pipeline has three significant advantages compared to other methods for ab initio repeat identification: 1) we generate not only consensus sequences, but keep the genomic intervals for the original aligned sequences, allowing straightforward analysis of evolutionary dynamics, 2) consensus sequences represent low-divergence, recently/currently active TE families, 3) segmental duplications are annotated as a useful by-product. We have compared our ab initio repeat annotations for 7 genome assemblies to other methods and demonstrate that CARP compares favourably with RepeatModeler, the most widely used repeat annotation package.
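The final triage step described above (aligning consensus sequences back to the genome and inspecting copy number versus length) can be caricatured in a few lines. The thresholds below are illustrative assumptions, not values prescribed by CARP:

```python
# Hedged sketch of the family triage step: separate recently active TE
# families from segmental duplications using copy number vs. median
# length of the genomic intervals behind each consensus. Thresholds
# are illustrative only.

def classify_family(copy_number: int, median_length: int) -> str:
    if copy_number >= 10:            # many similar copies -> likely active TE
        return "putative TE"
    if median_length >= 10_000:      # few, long copies -> segmental duplication
        return "putative segmental duplication"
    return "unresolved"

families = {"fam1": (250, 1_200), "fam2": (3, 45_000), "fam3": (4, 800)}
for name, (cn, length) in families.items():
    print(name, classify_family(cn, length))
```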
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded >98% (105,080 of 107,167) of false positives while retaining >95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
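A hedged sketch of the LR filtering idea, assuming a per-variant feature table and validated labels for training; the feature names and probability threshold are illustrative, and the paper's exact feature set is not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of LR-based variant filtering. Features (depth, genotype
# quality, strand bias) and the synthetic labels are placeholders standing
# in for a real training set of validated calls.

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 3))          # columns: DP, GQ, strand bias
y_train = (X_train @ np.array([1.0, 2.0, -1.5]) + rng.normal(size=500)) > 0

clf = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(5, 3))
p_true = clf.predict_proba(X_new)[:, 1]      # probability a call is genuine
keep = p_true > 0.9                          # threshold tuned to a target FDR
print(np.round(p_true, 3), keep)
```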
Change Detection via Selective Guided Contrasting Filters
NASA Astrophysics Data System (ADS)
Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.
2017-05-01
A change detection scheme based on guided contrasting was previously proposed. A guided contrasting filter takes two images (test and sample) as input and forms the output as a filtered version of the test image. Such a filter preserves the similar details and smooths the non-similar details of the test image with respect to the sample image. Because of this, the difference between the test image and its filtered version (the difference map) can serve as a basis for robust change detection. Guided contrasting is performed in two steps: at the first step some smoothing operator (SO) is applied for elimination of test image details; at the second step all matched details are restored with local contrast proportional to the value of some local similarity coefficient (LSC). The original guided contrasting filter was based on local average smoothing as the SO and local linear correlation as the LSC. In this paper we propose and implement a new set of selective guided contrasting filters based on different combinations of various SOs and thresholded LSCs. Linear average and Gaussian smoothing, nonlinear median filtering, and morphological opening and closing are considered as SOs. The local linear correlation coefficient, morphological correlation coefficient (MCC), mutual information, mean square MCC and geometrical correlation coefficients are applied as LSCs. Thresholding the LSC allows operation with non-normalized LSCs and enhances the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These selective guided contrasting filters are tested as a part of the previously proposed change detection pipeline, which contains the following stages: guided contrasting filtering on an image pyramid, calculation of the difference map, binarization, extraction of change proposals, and testing change proposals using local MCC. Experiments on real and simulated image databases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters are robust to weak geometrical discrepancies between the compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection results.
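A minimal Python sketch of one selective guided contrasting variant, with local averaging as the SO and thresholded local linear correlation as the LSC, as described above; the window size and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Selective guided contrasting: smooth the test image (SO = local mean),
# then restore details only where the thresholded local correlation (LSC)
# between test and sample exceeds the gate -- all-or-nothing recovery.

def guided_contrast(test, sample, size=9, thresh=0.5, eps=1e-9):
    m_t = uniform_filter(test, size)                      # SO: local mean
    m_s = uniform_filter(sample, size)
    cov = uniform_filter(test * sample, size) - m_t * m_s
    var_t = uniform_filter(test ** 2, size) - m_t ** 2
    var_s = uniform_filter(sample ** 2, size) - m_s ** 2
    lsc = cov / np.sqrt(np.maximum(var_t * var_s, eps))   # local correlation
    gate = (lsc > thresh).astype(float)                   # selective gating
    return m_t + gate * (test - m_t)                      # restore matched details

rng = np.random.default_rng(2)
sample = rng.normal(size=(64, 64))
test = sample.copy()
test[20:30, 20:30] += 3.0                                 # a genuine change
diff_map = np.abs(test - guided_contrast(test, sample))   # basis for detection
print(diff_map.max(), diff_map.mean())
```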
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the difference between interpolation to 2, 4, and 8 times density is nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
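The recommended ordering can be demonstrated in a few lines of Python, assuming SciPy; the Gaussian kernel width and zoom factors are illustrative, not the paper's exact settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

# Compare the processing orders discussed above on a toy pressure map:
# low-pass filter before interpolation versus interpolation alone, using
# bilinear (order=1) and bicubic (order=3) splines at 2x, 4x, 8x density.

rng = np.random.default_rng(3)
pressure = rng.random((16, 16)) * 200.0          # toy seating pressure array

for factor in (2, 4, 8):
    for order, name in ((1, "bilinear"), (3, "bicubic")):
        filt_first = zoom(gaussian_filter(pressure, sigma=1.0), factor, order=order)
        interp_only = zoom(pressure, factor, order=order)
        print(f"{factor}x {name}: filtered-first peak {filt_first.max():.1f}, "
              f"interp-only peak {interp_only.max():.1f}")
```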
GALLIEN, Laure; MAZEL, Florent; LAVERGNE, Sébastien; RENAUD, Julien; DOUZET, Rolland; THUILLER, Wilfried
2015-01-01
Despite considerable efforts devoted to investigating the community assembly processes driving plant invasions, few general conclusions have been drawn so far. Three main processes, generally acting as successive filters, are thought to be of prime importance. The invader has to disperse (1st filter) into a suitable environment (2nd filter) and succeed in establishing in recipient communities through competitive interactions (3rd filter) using two strategies: competition avoidance by the use of different resources (resource opportunity), or competitive exclusion of native species. Surprisingly, despite the general consensus on the importance of investigating these three processes and their interplay, they are usually studied independently. Here we aim to analyse these three filters together, by including all of them (abiotic environment, dispersal and biotic interactions) in models of invasive species distributions. We first propose a suite of indices (based on species functional dissimilarities) intended to reflect the two competitive strategies (resource opportunity and competitive exclusion). Then, we use a set of generalised linear models to explain the distribution of seven herbaceous invaders in natural communities (using a large vegetation database for the French Alps containing 5,000 community-plots). Finally, we measure the relative importance of the competitive interaction indices, identify the type of coexistence mechanism involved, and assess how this varies along environmental gradients. Adding competition indices significantly improved the models' performance, but neither resource opportunity nor competitive exclusion was a common strategy among the seven species. Overall, we show that combining environmental, dispersal and biotic information to model invasions has excellent potential for improving our understanding of invader success. PMID:26290653
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
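Filter-function conventions vary between papers. Assuming the common choice F(omega) = |omega * integral of y(t) exp(i omega t) dt|^2 for the dephasing filter function of a pi-pulse sequence with sign-switching function y(t), a direct numerical evaluation looks like this:

```python
import numpy as np

# Numerical sketch of a dephasing filter function for a piecewise-constant
# pi-pulse sequence, evaluated from the switching function y(t) = +/-1.
# Normalization conventions differ across the literature; this uses
# F(w) = |w * integral y(t) exp(i w t) dt|^2.

def filter_function(pulse_times, T, omegas, n_grid=4000):
    t = np.linspace(0.0, T, n_grid)
    y = np.ones_like(t)
    for tp in pulse_times:                 # each pi pulse flips the sign
        y[t >= tp] *= -1.0
    dt = t[1] - t[0]
    integral = (y[None, :] * np.exp(1j * omegas[:, None] * t)).sum(axis=1) * dt
    return np.abs(omegas * integral) ** 2

T = 1.0
omegas = np.linspace(0.1, 200.0, 500)
free = filter_function([], T, omegas)                  # free evolution (Ramsey)
cpmg = filter_function([T / 4, 3 * T / 4], T, omegas)  # two-pulse CPMG
print(free[:3], cpmg[:3])
```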
Opinion formation models in static and dynamic social networks
NASA Astrophysics Data System (ADS)
Singh, Pramesh
We study models of opinion formation on static as well as dynamic networks where interaction among individuals is governed by widely accepted social theories. In particular, three models of competing opinions based on distinct interaction mechanisms are studied. A common feature in all of these models is the existence of a tipping point in terms of a model parameter beyond which a rapid consensus is reached. In the first model, studied on a static network, a node adopts a particular state (opinion) if a threshold fraction of its neighbors are already in that state. We introduce a few initiator nodes which are in state '1' in a population where every node is in state '0'. Thus, opinion '1' spreads through the population until no further influence is possible. The size of the spread is greatly affected by how these initiator nodes are selected. We find that there exists a critical fraction of initiators p_c that is needed to trigger global cascades for a given threshold φ. We also study heuristic strategies for selecting a set of initiator nodes in order to maximize the cascade size. The structural properties of networks also play an important role in the spreading process. We study how the dynamics are affected by changing the clustering in a network; it turns out that local clustering aids spreading. Next, we study a model where the network is dynamic and interactions are homophilic. We find that homophily-driven rewiring impedes the reaching of consensus, and in the absence of committed nodes (nodes that cannot be influenced on their opinion), the consensus time T_c diverges exponentially with network size N. As we introduce a fraction of committed nodes, beyond a critical value, the scaling of T_c becomes logarithmic in N. We also find that a slight change in the interaction rule can produce strikingly different scaling behaviors of T_c. However, introducing committed agents in the system drastically improves the scaling of the consensus time regardless of the interaction rules considered. Finally, a three-state (leftist, rightist, centrist) model that couples the dynamics of social balance with an external deradicalizing field is studied. The mean-field analysis shows that for a weak external field, the system exhibits a metastable fixed point and a saddle point in addition to a stable fixed point. However, if the strength of the external field is sufficiently large (larger than a critical value), there is only one (stable) fixed point which corresponds to an all-centrist consensus state (absorbing state). In the weak-field regime, the convergence time to the absorbing state is evaluated using the quasi-stationary (QS) distribution and is found to be in good agreement with the results obtained by numerical simulations.
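The first model lends itself to a compact simulation. Below is a sketch of the fractional-threshold cascade with randomly chosen initiators, assuming networkx; the graph, threshold φ, and initiator fractions are illustrative:

```python
import random
import networkx as nx

# Fractional-threshold cascade: a node switches to state 1 once at least
# a fraction phi of its neighbors hold state 1, seeded by a fraction p
# of randomly chosen initiator nodes.

def cascade_size(G, phi=0.3, p=0.05, seed=0):
    rng = random.Random(seed)
    state = {v: 0 for v in G}
    for v in rng.sample(list(G), int(p * len(G))):
        state[v] = 1                                 # initiators
    changed = True
    while changed:                                   # iterate to a fixed point
        changed = False
        for v in G:
            if state[v] == 0 and G.degree(v) > 0:
                frac = sum(state[u] for u in G[v]) / G.degree(v)
                if frac >= phi:
                    state[v] = 1
                    changed = True
    return sum(state.values()) / len(G)

G = nx.erdos_renyi_graph(2000, 0.005, seed=1)
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: cascade covers {cascade_size(G, p=p):.2f} of nodes")
```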
NASA Astrophysics Data System (ADS)
Asshoff, P. U.; Sambricio, J. L.; Rooney, A. P.; Slizovskiy, S.; Mishchenko, A.; Rakowski, A. M.; Hill, E. W.; Geim, A. K.; Haigh, S. J.; Fal'ko, V. I.; Vera-Marun, I. J.; Grigorieva, I. V.
2017-09-01
Graphene is hailed as an ideal material for spintronics due to weak intrinsic spin-orbit interaction that facilitates lateral spin transport and tunability of its electronic properties, including a possibility to induce magnetism in graphene. Another promising application of graphene is related to its use as a spacer separating ferromagnetic metals (FMs) in vertical magnetoresistive devices, the most prominent class of spintronic devices widely used as magnetic sensors. In particular, few-layer graphene was predicted to act as a perfect spin filter. Here we show that the role of graphene in such devices (at least in the absence of epitaxial alignment between graphene and the FMs) is different and determined by proximity-induced spin splitting and charge transfer with adjacent ferromagnetic metals, making graphene a weak FM electrode rather than a spin filter. To this end, we report observations of magnetoresistance (MR) in vertical Co-graphene-NiFe junctions with 1-4 graphene layers separating the ferromagnets, and demonstrate that the dependence of the MR sign on the number of layers and its inversion at relatively small bias voltages is consistent with spin transport between weakly doped and differently spin-polarized layers of graphene. The proposed interpretation is supported by the observation of an MR sign reversal in biased Co-graphene-hBN-NiFe devices and by comprehensive structural characterization. Our results suggest a new architecture for vertical devices with electrically controlled MR.
NASA Technical Reports Server (NTRS)
Lai, Jonathan Y.
1994-01-01
This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are embedded in strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS)-based adaptive filter for the proposed signal model is investigated, and promising simulation results testify to its potential for clutter rejection, leading to more accurate estimation of windspeed and thus a better assessment of the windshear hazard.
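As a hedged illustration of the adaptive-filtering idea (not the dissertation's algorithm), an LMS adaptive line enhancer can separate a strong sinusoidal clutter component from a weak broadband return; the filter length, step size, and delay are illustrative:

```python
import numpy as np

# LMS adaptive line enhancer: a delayed copy of the input predicts the
# narrowband (clutter) component; the prediction error retains the weak
# broadband weather return.

rng = np.random.default_rng(4)
n = 4000
clutter = 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(n))   # strong sinusoidal clutter
weather = 0.2 * rng.normal(size=n)                        # weak broadband return
x = clutter + weather

L, mu, delay = 32, 1e-3, 8
w = np.zeros(L)
err = np.zeros(n)
for k in range(L + delay, n):
    u = x[k - delay - L:k - delay][::-1]   # delayed tap vector
    y = w @ u                              # predicted (clutter) sample
    err[k] = x[k] - y                      # residual ~ weather component
    w += 2 * mu * err[k] * u               # LMS weight update

print("input power", np.var(x), "residual power", np.var(err[L + delay:]))
```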
Potential radiation control of biofouling bacteria on intake filters
NASA Astrophysics Data System (ADS)
Eichholz, Geoffrey G.; Jones, Cynthia G.; Haynes, Harold E.
The biofouling of filters at deep wells supplying water for industrial and drinking water purposes by various iron- and sulfur-reducing bacteria is a widespread problem in the United States and can cause serious economic losses. Among the means of control, steam heating or chemical additives can be applied only intermittently and have their own environmental impact. Preliminary studies have shown that installation of a sealed gamma radiation source may provide an alternative solution. Analysis of a range of water samples from contaminated wells identified many of the samples as rich in Siderocapsa and Pseudomonas bacteria. Static and dynamic experiments on water samples at various doses and dose rates have shown that these organisms are relatively radiation-sensitive, with a lethal dose in the range of 200-400 Gy (20-40 kR). Since the main objective is to restrict growth or deposition of plaque on filters, dose rates of the order of 50-75 Gy/hr would be adequate. Such dose rates could be obtained with relatively weak sources, depending on filter dimensions. A conceptual design for such a system has been proposed.
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but they tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. The resulting scoring function, called QMEANclust, achieves a correlation coefficient of 0.9 between predicted quality score and GDT_TS averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than the methods of any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher-quality models, which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust depends strongly on the composition and quality of the model ensemble to be analysed; therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
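A schematic sketch of the QMEANclust idea, with a placeholder similarity function standing in for a GDT_TS-like model comparison; the keep fraction is an illustrative parameter:

```python
import numpy as np

# Pre-filtered consensus selection: rank models with a single-model score,
# keep the top fraction as the reference set, then score every model by
# its mean structural similarity to that set. `similarity` is a placeholder
# for a GDT_TS-like pairwise comparison.

def qmeanclust_like(single_scores, similarity, keep_frac=0.2):
    order = np.argsort(single_scores)[::-1]
    reference = order[: max(2, int(keep_frac * len(order)))]  # pre-filtered subset
    return np.array([
        np.mean([similarity(i, j) for j in reference if j != i])
        for i in range(len(single_scores))
    ])

rng = np.random.default_rng(5)
sim = rng.random((30, 30))
sim = (sim + sim.T) / 2                      # symmetric toy similarity matrix
scores = rng.random(30)                      # toy single-model scores
cons = qmeanclust_like(scores, lambda i, j: sim[i, j])
print("best model by consensus:", int(np.argmax(cons)))
```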
Stabilizing IkappaBalpha by "consensus" design.
Ferreiro, Diego U; Cervantes, Carla F; Truhlar, Stephanie M E; Cho, Samuel S; Wolynes, Peter G; Komives, Elizabeth A
2007-01-26
IkappaBalpha is the major regulator of transcription factor NF-kappaB function. The ankyrin repeat region of IkappaBalpha mediates specific interactions with NF-kappaB dimers, but ankyrin repeats 1, 5 and 6 display a highly dynamic character when not in complex with NF-kappaB. Using chemical denaturation, we show here that IkappaBalpha displays two folding transitions: a non-cooperative conversion under weak perturbation, and a major cooperative folding phase upon stronger insult. Taking advantage of a native Trp residue in ankyrin repeat (AR) 6 and engineered Trp residues in AR2, AR4 and AR5, we show that the cooperative transition involves AR2 and AR3, while the non-cooperative transition involves AR5 and AR6. The major structural transition can be affected by single amino acid substitutions converging to the "consensus" ankyrin repeat sequence, increasing the native state stability significantly. We further characterized the structural and dynamic properties of the native state ensemble of IkappaBalpha and the stabilized mutants by H/²H exchange mass spectrometry and NMR. The solution experiments were complemented with molecular dynamics simulations to elucidate the microscopic origins of the stabilizing effect of the consensus substitutions, which can be traced to the fast conformational dynamics of the folded ensemble.
Grogl, Max; Boni, Marina; Carvalho, Edgar M.; Chebli, Houda; Cisse, Mamoudou; Diro, Ermias; Fernandes Cota, Gláucia; Erber, Astrid C.; Gadisa, Endalamaw; Handjani, Farhad; Khamesipour, Ali; Llanos-Cuentas, Alejandro; López Carvajal, Liliana; Grout, Lise; Lmimouni, Badre Eddine; Mokni, Mourad; Nahzat, Mohammad Sami; Ben Salah, Afif; Ozbel, Yusuf; Pascale, Juan Miguel; Rizzo Molina, Nidia; Rode, Joelle; Romero, Gustavo; Ruiz-Postigo, José Antonio; Gore Saravia, Nancy; Soto, Jaime; Uzun, Soner; Mashayekhi, Vahid; Vélez, Ivan Dario; Vogt, Florian; Zerpa, Olga; Arana, Byron
2018-01-01
Introduction Progress with the treatment of cutaneous leishmaniasis (CL) has been hampered by inconsistent methodologies used to assess treatment effects. A sizable number of trials conducted over the years has generated only weak evidence backing current treatment recommendations, as shown by systematic reviews on old-world and new-world CL (OWCL and NWCL). Materials and methods Using a previously published guidance paper on CL treatment trial methodology as the reference, consensus was sought on key parameters including core eligibility and outcome measures, among OWCL (7 countries, 10 trial sites) and NWCL (7 countries, 11 trial sites) during two separate meetings. Results Findings and level of consensus within and between OWCL and NWCL sites are presented and discussed. In addition, CL trial site characteristics and capacities are summarized. Conclusions The consensus reached allows standardization of future clinical research across OWCL and NWCL sites. We encourage CL researchers to adopt and adapt as required the proposed parameters and outcomes in their future trials and provide feedback on their experience. The expertise afforded between the two sets of clinical sites provides the basis for a powerful consortium with potential for extensive, standardized assessment of interventions for CL and faster approval of candidate treatments. PMID:29329311
Automated segmentation of comet assay images using Gaussian filtering and fuzzy clustering.
Sansone, Mario; Zeni, Olga; Esposito, Giovanni
2012-05-01
Comet assay is one of the most popular tests for the detection of DNA damage at the single-cell level. In this study, an algorithm for comet assay analysis has been proposed, aiming to minimize user interaction and provide reproducible measurements. The algorithm comprises two steps: (a) comet identification via Gaussian pre-filtering and morphological operators; (b) comet segmentation via fuzzy clustering. The algorithm has been evaluated using comet images from human leukocytes treated with a commonly used DNA damaging agent. A comparison of the proposed approach with a commercial system has been performed. Results show that fuzzy segmentation can increase overall sensitivity, giving benefits in bio-monitoring studies where weak genotoxic effects are expected.
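A compact sketch of the two steps on synthetic data, with a minimal fuzzy c-means loop written out explicitly; the kernel width, iteration count, and fuzzifier m are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# (a) Gaussian pre-filtering, (b) two-class fuzzy c-means on pixel
# intensities to separate comet from background.

def fcm_intensity(img, n_iter=50, m=2.0):
    x = img.ravel().astype(float)
    centers = np.array([x.min(), x.max()])            # init: dark vs bright
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))                # membership kernel
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    return u[:, np.argmax(centers)].reshape(img.shape)  # bright-class membership

rng = np.random.default_rng(6)
img = rng.normal(10, 2, size=(64, 64))
img[24:40, 20:50] += 25.0                              # synthetic comet
membership = fcm_intensity(gaussian_filter(img, sigma=2.0))
comet_mask = membership > 0.5
print("comet pixels:", int(comet_mask.sum()))
```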
Occupational Injury and Illness Surveillance: Conceptual Filters Explain Underreporting
Azaroff, Lenore S.; Levenstein, Charles; Wegman, David H.
2002-01-01
Occupational health surveillance data are key to effective intervention. However, the US Bureau of Labor Statistics survey significantly underestimates the incidence of work-related injuries and illnesses. Researchers supplement these statistics with data from other systems not designed for surveillance. The authors apply the filter model of Webb et al. to underreporting by the Bureau of Labor Statistics, workers’ compensation wage-replacement documents, physician reporting systems, and medical records of treatment charged to workers’ compensation. Mechanisms are described for the loss of cases at successive steps of documentation. Empirical findings indicate that workers repeatedly risk adverse consequences for attempting to complete these steps, while systems for ensuring their completion are weak or absent. PMID:12197968
NASA Astrophysics Data System (ADS)
Prengaman, R. J.; Thurber, R. E.; Bath, W. G.
The usefulness of radar systems depends on the ability to distinguish between signals returned from desired targets and noise. A retrospective processor uses all contacts (or 'plots') from several past radar scans, taking into account all possible target trajectories formed from stored contacts for each input detection. The processor eliminates many false alarms while retaining those contacts describing reasonable trajectories. Employing a retrospective processor therefore makes it possible to obtain large improvements in detection sensitivity in certain important clutter environments. Attention is given to the retrospective processing concept, a theoretical analysis of the multiscan detection process, the experimental evaluation of the retrospective data filter, and aspects of its hardware implementation.
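A toy sketch of the retrospective idea: confirm a current contact only if stored contacts from prior scans can be chained to it along a speed-bounded trajectory. The speed gate and scan geometry are illustrative assumptions:

```python
from itertools import product
import numpy as np

# Retrospective multiscan confirmation: a last-scan contact is kept only
# if one contact per prior scan forms a physically reasonable trajectory.

def confirms(track, dt, v_max=300.0):
    """track: list of (x, y) positions, one per consecutive scan."""
    steps = np.diff(np.asarray(track), axis=0)
    speeds = np.hypot(steps[:, 0], steps[:, 1]) / dt
    return bool(np.all(speeds <= v_max))

def retrospective_filter(scans, dt=5.0, v_max=300.0):
    """scans: per-scan contact lists; returns confirmed last-scan contacts."""
    confirmed = []
    for c in scans[-1]:
        # brute-force search over one contact per prior scan
        for history in product(*scans[:-1]):
            if confirms(list(history) + [c], dt, v_max):
                confirmed.append(c)
                break
    return confirmed

scans = [[(0, 0), (900, 50)], [(100, 10), (870, 400)], [(210, 15)]]
print(retrospective_filter(scans))   # keeps only contacts on plausible tracks
```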
Singh, Aditya; Bhatia, Prateek
2016-12-01
Sanger sequencing platforms, such as Applied Biosystems instruments, generate chromatogram files. Generally, for one region of a sequence, both forward and reverse primers are used to sequence that area; the two resulting sequences must be aligned and a consensus generated before mutation detection studies. This work is cumbersome and takes time, especially if the gene is large with many exons. Hence, we devised a rapid automated command system to filter, build, and align consensus sequences and also optionally extract exonic regions, translate them in all frames, and perform an amino acid alignment, starting from raw sequence data, within a very short time. At its full capability, the Automated Mutation Analysis Pipeline (ASAP) is able to read "*.ab1" chromatogram files through a command line interface, convert them to the FASTQ format, trim the low-quality regions, reverse-complement the reverse sequence, create a consensus sequence, extract the exonic regions using a reference exonic sequence, translate the sequence in all frames, and align the nucleic acid and amino acid sequences to reference nucleic acid and amino acid sequences, respectively. All files are created and can be used for further analysis. ASAP is available as a Python 3.x executable at https://github.com/aditya-88/ASAP. The version described in this paper is 0.28.
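As a hedged sketch of the consensus-building step only (not ASAP's implementation), assuming the forward read and the reverse-complemented reverse read are already aligned with '-' gaps and that per-base Phred qualities are available:

```python
# Consensus calling for two pre-aligned Sanger reads: agree where bases
# match, fill gaps from the other read, and resolve disagreements by
# taking the base with the higher Phred quality.

def consensus(fwd, rev, q_fwd, q_rev):
    out = []
    for bf, br, qf, qr in zip(fwd, rev, q_fwd, q_rev):
        if bf == br:
            out.append(bf)                       # agreement (incl. shared gaps)
        elif bf == "-":
            out.append(br)
        elif br == "-":
            out.append(bf)
        else:
            out.append(bf if qf >= qr else br)   # trust higher quality
    return "".join(c for c in out if c != "-")

fwd = "ACGT-ACGTAC"
rev = "ACGTAACGTAC"
print(consensus(fwd, rev, [40] * len(fwd), [30] * len(rev)))  # -> ACGTAACGTAC
```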
Estimates of the Modeling Error of the α-Models of Turbulence in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Dunca, Argus A.
2017-12-01
This report investigates the convergence rate of the weak solutions $w^\alpha$ of the Leray-α, modified Leray-α, Navier-Stokes-α and the zeroth ADM turbulence models to a weak solution $u$ of the Navier-Stokes equations. It is assumed that this weak solution $u$ of the NSE belongs to the space $L^4(0,T;H^1)$. It is shown that under this regularity condition the error $u - w^\alpha$ is $O(\alpha)$ in the norms $L^2(0,T;H^1)$ and $L^\infty(0,T;L^2)$, thus improving related known results. It is also shown that the averaged error $\overline{u} - \overline{w^\alpha}$ is of higher order, $O(\alpha^{1.5})$, in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.
Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou
2014-07-29
In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
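The SR mechanism itself can be illustrated with the classic overdamped bistable system dx/dt = a*x - b*x^3 + s(t) + noise, in which noise-assisted hopping between wells can amplify a weak sub-threshold input. All parameters below are illustrative, not the paper's values:

```python
import numpy as np

# Euler-Maruyama integration of the overdamped bistable SR benchmark:
# dx/dt = a*x - b*x**3 + s(t) + noise, driven by a weak periodic forcing.

rng = np.random.default_rng(7)
a, b, dt, n = 1.0, 1.0, 1e-3, 60_000
t = np.arange(n) * dt
s = 0.25 * np.sin(2 * np.pi * 0.5 * t)          # weak sub-threshold forcing

def sr_output(noise_std):
    x = np.zeros(n)
    for k in range(n - 1):
        drift = a * x[k] - b * x[k] ** 3 + s[k]
        x[k + 1] = x[k] + drift * dt + noise_std * np.sqrt(dt) * rng.normal()
    return x

for sigma in (0.1, 0.7, 3.0):
    x = sr_output(sigma)
    # the forcing is tracked only when noise assists inter-well hopping
    print(f"sigma={sigma}: corr(output, forcing) = {np.corrcoef(x, s)[0, 1]:.3f}")
```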
Tambor, Marzena; Pavlova, Milena; Golinowska, Stanisława; Sowada, Christoph; Groot, Wim
2015-08-01
Although patient charges for health-care services may contribute to a more sustainable health-care financing, they often raise public opposition, which impedes their introduction. Thus, a consensus among the main stakeholders on the presence and role of patient charges should be worked out to assure their successful implementation. To analyse the acceptability of formal patient charges for health-care services in a basic package among different health-care system stakeholders in six Central and Eastern European countries (Bulgaria, Hungary, Lithuania, Poland, Romania and Ukraine). Qualitative data were collected in 2009 via focus group discussions and in-depth interviews with health-care consumers, providers, policy makers and insurers. The same participants were asked to fill in a self-administrative questionnaire. Qualitative and quantitative data are analysed separately to outline similarities and differences in the opinions between the stakeholder groups and across countries. There is a rather weak consensus on patient charges in the countries. Health policy makers and insurers strongly advocate patient charges. Health-care providers overall support charges but their financial profits from the system strongly affects their approval. Consumers are against paying for services, mostly due to poor quality and access to health-care services and inability to pay. To build consensus on patient charges, the payment policy should be responsive to consumers' needs with regard to quality and equity. Transparency and accountability in the health-care system should be improved to enhance public trust and acceptance of patient payments. © 2012 John Wiley & Sons Ltd.
Occupational issues of adults with ADHD
2013-01-01
Background ADHD is a common neurodevelopmental disorder that persists into adulthood. Its symptoms cause impairments in a number of social domains, one of which is employment. We wish to produce a consensus statement on how ADHD affects employment. Methods This consensus development conference statement was developed as a result of a joint international meeting held in July 2010. The consensus committee was international in scope (United Kingdom, mainland Europe, United Arab Emirates) and consisted of individuals from a broad range of backgrounds (psychiatry, occupational medicine, health economics, disability advising). The objectives of the conference were to discuss some of the occupational impairments adults with ADHD may face and how to address these problems from an inclusive perspective. Furthermore, the conference looked at influencing policy and decision making at a political level to address impaired occupational functioning in adults with ADHD and fears around employing people with disabilities in general. Results The consensus was that there were clear weaknesses in the current arrangements in the UK and internationally to address occupational difficulties. Moreover, occupational health was not wholly integrated and used as a means of making positive changes to the workplace, but rather treated as a superfluous last resort that employers tried to avoid. The lack of cross-professional collaboration on occupational functioning in adults with ADHD was a further significant problem. Conclusions Future research needs to concentrate on further investigating occupational functioning in adults with ADHD and on piloting exploratory initiatives and tools, leading to a better and more informed understanding of possible barriers to employment and potential schemes to put in place to address these problems. PMID:23414364
Gilaberte, Y; Aguilar, M; Almagro, M; Correia, O; Guillén, C; Harto, A; Pérez-García, B; Pérez-Pérez, L; Redondo, P; Sánchez-Carpintero, I; Serra-Guillén, C; Valladares, L M
2015-10-01
Daylight-mediated photodynamic therapy (PDT) is a new type of PDT that is as effective as conventional PDT in grade 1 and 2 actinic keratosis but with fewer adverse effects, resulting in greater efficiency. The climatic conditions in the Iberian Peninsula require an appropriately adapted consensus protocol. We describe a protocol for the treatment of grade 1 and 2 actinic keratosis with daylight-mediated PDT and methyl aminolevulinate (MAL) adapted to the epidemiological and clinical characteristics of Spanish and Portuguese patients and the climatic conditions of both countries. Twelve dermatologists from different parts of Spain and Portugal with experience in the treatment of actinic keratosis with PDT convened to draft a consensus statement for daylight-mediated PDT with MAL in these countries. Based on a literature review and their own clinical experience, the group developed a recommended protocol. According to the recommendations adopted, patients with multiple grade 1 and 2 lesions, particularly those at risk of developing cancer, are candidates for this type of therapy. Daylight-mediated PDT can be administered throughout the year, although it is not indicated at temperatures below 10°C or at excessively high temperatures. Likewise, therapy should not be administered when it is raining, snowing, or foggy. The procedure is simple, requiring application of a sunscreen with a protection factor of at least 30 based exclusively on organic filters, appropriate preparation of the lesions, application of MAL without occlusion, and activation in daylight for 2 hours. This consensus statement represents a practical and detailed guideline to achieve maximum effectiveness of daylight-mediated PDT with MAL in Spain and Portugal with minimal adverse effects. Copyright © 2015 Elsevier España, S.L.U. and AEDV. All rights reserved.
Wavelength Scanning with a Tilting Interference Filter for Glow-Discharge Elemental Imaging.
Storey, Andrew P; Ray, Steven J; Hoffmann, Volker; Voronov, Maxim; Engelhard, Carsten; Buscher, Wolfgang; Hieftje, Gary M
2017-06-01
Glow discharges have long been used for depth profiling and bulk analysis of solid samples. In addition, over the past decade, several methods of obtaining lateral surface elemental distributions have been introduced, each with its own strengths and weaknesses. Challenges for each of these techniques are acceptable optical throughput and added instrumental complexity. Here, these problems are addressed with a tilting-filter instrument. A pulsed glow discharge is coupled to an optical system comprising an adjustable-angle tilting filter, collimating and imaging lenses, and a gated, intensified charge-coupled device (CCD) camera, which together provide surface elemental mapping of solid samples. The tilting-filter spectrometer is instrumentally simpler, produces less image distortion, and achieves higher optical throughput than a monochromator-based instrument, but has a much more limited tunable spectral range and poorer spectral resolution. As a result, the tilting-filter spectrometer is limited to single-element or two-element determinations, and only when the target spectral lines fall within an appropriate spectral range and can be spectrally discerned. Spectral interferences that result from heterogeneous impurities can be flagged and overcome by observing the spatially resolved signal response across the available tunable spectral range. The instrument has been characterized and evaluated for the spatially resolved analysis of glow-discharge emission from selected but representative samples.
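For orientation, the standard angle-tuning relation for interference filters, lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2), explains both the tuning mechanism and the limited tunable range noted above; the centre wavelength and effective index in this sketch are illustrative values, not parameters of the instrument described:

```python
import numpy as np

# Tilt-tuning curve of an interference filter: the passband centre
# blue-shifts as the tilt angle increases. lambda0 and n_eff are
# illustrative placeholder values.

lambda0, n_eff = 400.0, 2.0                  # nm; effective stack index
theta = np.deg2rad(np.arange(0, 31, 5))
lam = lambda0 * np.sqrt(1.0 - (np.sin(theta) / n_eff) ** 2)
for th, l in zip(np.rad2deg(theta), lam):
    print(f"tilt {th:4.1f} deg -> centre {l:6.2f} nm")
```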
Algaddafi, Ali; Altuwayjiri, Saud A; Ahmed, Oday A; Daho, Ibrahim
2017-01-01
Grid-connected inverters play a crucial role in generating energy to be fed to the grid. A filter is commonly used to suppress the switching frequency harmonics produced by the inverter; this filter is passive, and either an L- or LCL-filter. The latter is smaller in size compared to the L-filter, but choosing the optimal values of the LCL-filter is challenging due to resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and easily applied using traditional control theory. Fast Fourier Transform analysis is used to compare different grid-connected inverter control topologies. The modelled grid-connected inverter with the proposed controller complies with the IEEE-1547 standard, and the total harmonic distortion of the output current of the modelled inverter was just 0.25%, with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effect of strong and weak grid connection. Inverter effects on a resistive load connected at the point of common coupling are presented. Results show that the voltage and current of the resistive load increase when the grid is interrupted, which may cause failure of or damage to connected appliances.
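The quoted THD figure can be reproduced schematically: FFT the output current and take the ratio of the harmonic magnitudes to the fundamental. The synthetic waveform below (50 Hz fundamental with small 3rd and 5th harmonics) is an assumption chosen to land near 0.25%, not the paper's measured data:

```python
import numpy as np

# THD from an FFT of a synthetic inverter output current: one second of
# data at fs = 10 kHz gives 1 Hz bins, so harmonics fall on exact bins.

fs, f0, n = 10_000, 50, 10_000
t = np.arange(n) / fs
i_out = (np.sin(2 * np.pi * f0 * t)
         + 0.0020 * np.sin(2 * np.pi * 3 * f0 * t)
         + 0.0015 * np.sin(2 * np.pi * 5 * f0 * t))

spec = np.abs(np.fft.rfft(i_out)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f0))]
harmonics = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 40)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {100 * thd:.3f} %")   # ~0.25 % for these amplitudes
```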
Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes
Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin
2012-01-01
Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
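To make these processing steps concrete, here is a minimal Python sketch of temporal low-pass filtering, baseline drift removal and spatial smoothing as applied to optical mapping data; the sampling rate, cutoff frequencies, kernel widths and synthetic data below are illustrative assumptions, not values from the review.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

fs = 1000.0  # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)

# Synthetic optical action potential train with noise and slow baseline drift
signal = (np.sin(2 * np.pi * 4 * t) > 0.9).astype(float)  # crude upstroke train
noisy = signal + 0.2 * np.random.randn(t.size) + 0.5 * t   # noise + linear drift

# Temporal filtering: zero-phase low-pass Butterworth (cutoff is illustrative)
b, a = butter(4, 100.0 / (fs / 2), btype="low")
lowpassed = filtfilt(b, a, noisy)

# Baseline drift removal: subtract a low-order polynomial fit
coeffs = np.polyfit(t, lowpassed, deg=1)
detrended = lowpassed - np.polyval(coeffs, t)

# Spatial filtering: Gaussian smoothing of a (frames, rows, cols) movie
movie = np.random.randn(50, 64, 64)                      # stand-in for mapping data
smoothed = gaussian_filter(movie, sigma=(0, 1.5, 1.5))   # smooth in space only
```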
Common Viral Integration Sites Identified in Avian Leukosis Virus-Induced B-Cell Lymphomas
Justice, James F.; Morgan, Robin W.
2015-01-01
ABSTRACT Avian leukosis virus (ALV) induces B-cell lymphoma and other neoplasms in chickens by integrating within or near cancer genes and perturbing their expression. Four genes—MYC, MYB, Mir-155, and TERT—have previously been identified as common integration sites in these virus-induced lymphomas and are thought to play a causal role in tumorigenesis. In this study, we employ high-throughput sequencing to identify additional genes driving tumorigenesis in ALV-induced B-cell lymphomas. In addition to the four genes implicated previously, we identify other genes as common integration sites, including TNFRSF1A, MEF2C, CTDSPL, TAB2, RUNX1, MLL5, CXorf57, and BACH2. We also analyze the genome-wide ALV integration landscape in vivo and find increased frequency of ALV integration near transcriptional start sites and within transcripts. Previous work has shown ALV prefers a weak consensus sequence for integration in cultured human cells. We confirm this consensus sequence for ALV integration in vivo in the chicken genome. PMID:26670384
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated, and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
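To make the distinction concrete, the sketch below applies a matched filter and then sets the detection threshold from the distribution of the peaks of the filtered noise, estimated here by Monte Carlo as a stand-in for the analytic peak PDF used in the paper; the Gaussian template and all sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve, argrelmax

rng = np.random.default_rng(0)
template = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)  # assumed signal shape
template /= np.linalg.norm(template)

def matched_filter(data):
    # Correlation with the known template (matched filtering)
    return fftconvolve(data, template[::-1], mode="same")

# Monte Carlo estimate of the distribution of the *peaks* of the filtered
# noise, standing in for the analytic peak PDF derived in the paper.
peaks = []
for _ in range(200):
    noise = rng.standard_normal(4096)
    mf = matched_filter(noise)
    idx = argrelmax(mf)[0]          # local maxima of the filtered field
    peaks.extend(mf[idx])
peaks = np.array(peaks)

# Threshold for a 1% probability of false detection *per peak*,
# rather than per sample as in the naive use of the MF.
threshold = np.quantile(peaks, 0.99)
```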
NASA Astrophysics Data System (ADS)
Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng
2018-02-01
Due to uneven earthquake source and receiver distributions, our ability to isolate weak signals from interfering phases and to reconstruct missing data is fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain approach using a 'Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables an effective detection and reconstruction of weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
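A minimal sketch of a POCS-style fk-domain reconstruction follows, assuming a 2-D (time x trace) array with missing traces; the iteration alternates a sparsity projection in the fk domain with a data-consistency projection on the recorded traces. The threshold schedule and iteration count are illustrative assumptions, and the paper's band-stop filtering of interfering phases would act on the same spectrum.

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=50, p_keep=0.02):
    """POCS-style f-k reconstruction sketch.

    data : 2-D array (time x trace) with zeros at missing traces
    mask : boolean array, True where samples were actually recorded
    """
    x = data.copy()
    for k in range(n_iter):
        spec = np.fft.fft2(x)
        # Sparsity projection: keep only the strongest f-k coefficients;
        # the kept fraction grows (threshold relaxes) over iterations.
        thresh = np.quantile(np.abs(spec), 1 - p_keep * (k + 1) / n_iter)
        spec[np.abs(spec) < thresh] = 0.0
        # A band-stop mask could be applied to spec here to suppress
        # interfering phases, as the paper does.
        x = np.real(np.fft.ifft2(spec))
        x[mask] = data[mask]  # data-consistency projection on recorded samples
    return x
```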
Bessel smoothing filter for spectral-element mesh
NASA Astrophysics Data System (ADS)
Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.
2017-06-01
Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the efficiency and flexibility of the approach proposed.
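The key idea, applying the smoothing by solving the elliptic equation associated with the inverse filter, can be sketched on a regular 1-D grid as follows; the paper instead solves the weak form on the spectral-element mesh with a parallel, highly optimized conjugate gradient solver, so this fragment is only a structural analogue with an assumed grid spacing and coherent length.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def bessel_like_smooth(f, dx, coherent_len):
    """Smooth f by solving (I - c^2 * Laplacian) u = f with conjugate gradients.

    Regular-grid analogue of applying the filter through its inverse,
    i.e. the solution of an elliptic partial differential equation.
    A spatially varying coherent_len would make the coefficients variable.
    """
    n = f.size
    c2 = coherent_len ** 2
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx ** 2
    A = sp.eye(n) - c2 * lap          # symmetric positive definite operator
    u, info = cg(A, f)                # CG converges since A is SPD
    assert info == 0
    return u
```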
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Rosen, Mark; Madabhushi, Anant
2008-03-01
Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contributes to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. Quantitative evaluation on 18 1.5 T prostate MR datasets against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, suggesting that our method is able to successfully detect suspicious regions in the prostate.
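A simplified stand-in for the consensus-embedding idea is sketched below: several weak low-dimensional representations are generated by varying the embedding parameters, their object-distance matrices are averaged, and the consensus affinity is clustered. The random feature matrix, the use of LLE alone, and spectral clustering in place of the paper's consensus clustering are all assumptions of the sketch.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.metrics import pairwise_distances
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))  # stand-in for high-dimensional texture features

# Build several weak low-dimensional representations by varying parameters.
D_sum = np.zeros((len(X), len(X)))
for k in (8, 12, 16, 20):
    emb = LocallyLinearEmbedding(n_neighbors=k, n_components=3,
                                 random_state=0).fit_transform(X)
    D = pairwise_distances(emb)
    D_sum += D / D.max()        # normalise each embedding's distance scale

D_mean = D_sum / 4.0            # consensus of object adjacencies
affinity = np.exp(-D_mean / D_mean.mean())

# Partition objects in the consensus space into two classes.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
```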
NASA Astrophysics Data System (ADS)
Escalante, George
2017-05-01
Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including: particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements that occur in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.
Cunha, Eva S; Hatem, Christine L; Barrick, Doug
2016-08-01
Biomass deconstruction to small simple sugars is a potential approach to biofuels production; however, the highly recalcitrant nature of biomass limits the economic viability of this approach. Thus, research on efficient biomass degradation is necessary to achieve large-scale production of biofuels. Enhancement of cellulolytic activity by increasing synergism between cellulase enzymes holds promise in achieving high-yield biofuels production. Here we have inserted cellulase pairs from extremophiles into hyperstable α-helical consensus ankyrin repeat domain scaffolds. Such chimeric constructs allowed us to optimize arrays of enzyme pairs against a variety of cellulolytic substrates. We found that endocellulolytic domains CelA (CA) and Cel12A (C12A) act synergistically in the context of ankyrin repeats, with both three and four repeat spacing. The extent of synergy differs for different substrates. Also, having C12A N-terminal to CA provides greater synergy than the reverse construct, especially against filter paper. In contrast, we do not see synergy for these enzymes in tandem with CelK (CK) catalytic domain, a larger exocellulase, demonstrating the importance of enzyme identity in synergistic enhancement. Furthermore, we found endocellulases CelD and CA with three repeat spacing to act synergistically against filter paper. Importantly, connecting CA and C12A with a disordered linker of similar contour length shows no synergistic enhancement, indicating that synergism results from connecting these domains with folded ankyrin repeats. These results show that ankyrin arrays can be used to vary spacing and orientation between enzymes, helping to design and optimize artificial cellulosomes, providing a novel architecture for synergistic enhancement of enzymatic cellulose degradation. Proteins 2016; 84:1043-1054. © 2016 Wiley Periodicals, Inc. PMID:27071357
Well-posed and stable transmission problems
NASA Astrophysics Data System (ADS)
Nordström, Jan; Linders, Viktor
2018-07-01
We introduce the notion of a transmission problem to describe a general class of problems where different dynamics are coupled in time. Well-posedness and stability are analysed for continuous and discrete problems using both strong and weak formulations, and a general transmission condition is obtained. The theory is applied to the coupling of fluid-acoustic models, multi-grid implementations, adaptive mesh refinements, multi-block formulations and numerical filtering.
Sensor Management for Fighter Applications
2006-06-01
has consistently shown that by directly estimating the probability density of a target state using a track-before-detect scheme, weak and densely... track-before-detect nonlinear filter was constructed to estimate the joint density of all state variables. A simulation that emulates estimator... targets in clutter and noise from sensed kinematic and identity data. Among the most capable is track-before-detect (TBD), which delivers
Extraction of a Weak Co-Channel Interfering Communication Signal Using Adaptive Filtering
2015-03-01
NASA Astrophysics Data System (ADS)
Nikitin, Alexei V.; Epard, Marc; Lancaster, John B.; Lutes, Robert L.; Shumaker, Eric A.
2012-12-01
A strong digital communication transmitter in close physical proximity to a receiver of a weak signal can noticeably interfere with the latter even when the respective channels are tens or hundreds of megahertz apart. When time domain observations are made in the signal chain of the receiver between the first mixer and the baseband, this interference is likely to appear impulsive. The impulsive nature of this interference provides an opportunity to reduce its power by nonlinear filtering, improving the quality of the receiver channel. This article describes the mitigation, by a particular nonlinear filter, of the impulsive out-of-band (OOB) interference induced in High Speed Downlink Packet Access (HSDPA) by WiFi transmissions, protocols which coexist in many 3G smartphones and mobile hotspots. Our measurements show a decrease in the maximum error-free bit rate of a 1.95 GHz HSDPA receiver caused by the impulsive interference from an OOB 2.4 GHz WiFi transmission, sometimes down to a small fraction of the rate observed in the absence of the interference. We apply a nonlinear SPART filter to recover a noticeable portion of the lost rate and maintain an error-free connection under much higher levels of the WiFi interference than a receiver that does not contain such a filter. These measurements support our wider investigation of OOB interference resulting from digital modulation, which appears impulsive in a receiver, and its mitigation by nonlinear filters.
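SPART itself is a specific adaptive nonlinear filter, so the sketch below only illustrates the general principle of nonlinear impulse suppression with a generic Hampel filter: samples that deviate strongly from a robust local estimate are replaced, which attenuates impulsive interference while leaving the non-impulsive signal largely untouched. The window size and rejection threshold are illustrative assumptions.

```python
import numpy as np

def hampel(x, half_window=5, n_sigmas=3.0):
    """Generic nonlinear impulse suppressor (Hampel filter), standing in
    for the adaptive nonlinear filtering described in the abstract."""
    y = x.copy()
    k = 1.4826  # scale factor relating MAD to the Gaussian sigma
    for i in range(half_window, len(x) - half_window):
        window = x[i - half_window:i + half_window + 1]
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if np.abs(x[i] - med) > n_sigmas * mad:
            y[i] = med  # replace the impulsive sample with the local median
    return y
```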
Filter design for the detection of compact sources based on the Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.
2005-05-01
This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and it takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). The value of these two parameters can be determined, given the a priori probability density function of the amplitudes of the sources, such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once the number density of spurious sources has been fixed. The new filter includes as particular cases the standard MF and the SAF. As a result of its design, the BSAF outperforms these filters. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) significantly improves the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. Finally, an estimation of the amplitude of the source (most probable value) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
Portable remote laser sensor for methane leak detection
NASA Technical Reports Server (NTRS)
Grant, W. B.; Hinkley, E. D., Jr. (Inventor)
1984-01-01
A portable laser system for remote detection of methane gas leaks and concentrations is disclosed. The system transmitter includes first and second lasers, tuned respectively to a wavelength coincident with a strong absorption line of methane and a reference wavelength which is weakly absorbed by methane gas. The system receiver includes a spherical mirror for collecting the reflected laser radiation and focusing the collected radiation through a narrowband optical filter onto an optical detector. The filter is tuned to the wavelengths of the two lasers and rejects background noise. The output of the optical detector is processed by a lock-in detector synchronized to the chopper, which measures the difference between the first wavelength signal and the reference wavelength signal.
Capacitive detection of micromotions: Monitoring ballistics of a developing avian embryo
NASA Astrophysics Data System (ADS)
Szymanski, Jan A.; Pawlak, Krzysztof; Wasowicz, Pawel; Moscicki, Jozef K.
2002-09-01
An instrument for noninvasive monitoring of very weak biomechanical activities of small living organisms is described. The construction is sufficiently flexible to permit a range of studies including developing embryos of oviparous animals, pests that live in loose materials and timber, and insects that develop in cocoons. Motions are detected by monitoring a current generated by the fluctuating position of the object-loaded electrode of a capacitive sensor. To maximize the signal, oscillations of the electrode are mechanically enhanced and the current is amplified and filtered by a two-stage signal amplifier and a bank of six active Butterworth filters. The device is optimized for ballistocardiography of hen embryos. The sensitivity achieved makes possible quantitative studies of heart activity of 7-day-old embryos.
Multimodal transmission property in a liquid-filled photonic crystal fiber
NASA Astrophysics Data System (ADS)
Lin, Wei; Miao, Yinping; Song, Binbin; Zhang, Hao; Liu, Bo; Liu, Yange; Yan, Donglin
2015-02-01
The multimode interference (MMI) effect in a liquid-filled photonic crystal fiber (PCF) has been experimentally demonstrated by fully infiltrating the air-hole cladding of a solid-core PCF with a refractive index (RI) matching liquid whose RI is close to that of the silica background. Due to the weak mode confinement of the cladding region, several high-order modes are excited to establish the multimode interference effect. The multimode interferometer shows a good temperature tunability of 12.30 nm/K, which makes it a good candidate for highly tunable optical filtering as well as temperature sensing applications. Furthermore, this MMI effect holds great promise for various applications such as highly sensitive multi-parameter sensing, tunable optical filtering, and surface-enhanced Raman scattering.
Application of based on improved wavelet algorithm in fiber temperature sensor
NASA Astrophysics Data System (ADS)
Qi, Hui; Tang, Wenjuan
2018-03-01
Accurate temperature measurement is crucial in distributed optical fiber temperature sensors. In order to solve the problem of temperature measurement error caused by the weak Raman scattering signal and strong noise in the system, a new improved wavelet algorithm is presented. On the basis of the traditional modulus-maxima wavelet algorithm, signal correlation is taken into account to improve the ability to distinguish signal from noise; this is combined with a scale-adaptive wavelet decomposition method to avoid the signal loss or residual noise that results from a mismatched decomposition scale. The filtering performance of the algorithm is compared with other methods in Matlab. Finally, a 3 km distributed optical fiber temperature sensing system is used for verification. Experimental results show that the temperature accuracy generally improved by 0.5233.
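The paper's improved modulus-maxima algorithm is not spelled out in the abstract, so the following sketch shows only the standard wavelet-threshold baseline it builds on, using PyWavelets; the wavelet, decomposition level and universal threshold are conventional assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Baseline wavelet threshold denoising (not the paper's exact
    modulus-maxima variant, which additionally exploits signal
    correlation and adaptive decomposition scales)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale from the finest detail coefficients (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(x.size))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]
```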
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is fundamental to reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm, in which the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, making the matching robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filtering method. A point cloud for one view is constructed accurately from two view images; afterwards, the overlapping features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental-restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
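The Hellinger-kernel matching step can be sketched with the familiar 'RootSIFT' transformation, after which ordinary Euclidean distances between descriptors reflect the Hellinger kernel; the brute-force matcher and ratio-test threshold below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def hellinger_transform(desc):
    """Map SIFT descriptors so that Euclidean distance reflects the
    Hellinger kernel: L1-normalise, then take elementwise square roots."""
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + 1e-12)
    return np.sqrt(desc)

def match(desc1, desc2, ratio=0.8):
    """Brute-force nearest-neighbour matching with Lowe's ratio test."""
    d1, d2 = hellinger_transform(desc1), hellinger_transform(desc2)
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(d1))
    keep = dists[rows, best] < ratio * dists[rows, second]
    return np.flatnonzero(keep), best[keep]  # matched index pairs
```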
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Performance Analysis of Local Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Tong, Xin T.
2018-03-01
Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence in two dynamical regimes.
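A minimal sketch of the analysis step of a localized EnKF follows, with a Gaussian distance taper standing in for the commonly used Gaspari-Cohn function and a perturbed-observation update; all dimensions and noise levels are assumptions of the sketch, not the paper's setup.

```python
import numpy as np

def lenkf_analysis(ensemble, obs, obs_idx, obs_var, loc_radius):
    """EnKF analysis step with covariance localization (sketch).

    ensemble : (n_ens, n_state) forecast ensemble, updated in place
    obs      : observed values at state indices obs_idx
    """
    n_ens, n = ensemble.shape
    A = ensemble - ensemble.mean(axis=0)
    P = A.T @ A / (n_ens - 1)            # sample forecast covariance

    # Distance-based taper: damp long-range spurious sample correlations.
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    taper = np.exp(-0.5 * (d / loc_radius) ** 2)
    P_loc = P * taper                     # Schur (elementwise) product

    obs_idx = np.asarray(obs_idx)
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0
    S = H @ P_loc @ H.T + obs_var * np.eye(len(obs_idx))
    K = P_loc @ H.T @ np.linalg.inv(S)    # localized Kalman gain

    rng = np.random.default_rng(0)
    for i in range(n_ens):                # perturbed-observation update
        y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), len(obs_idx))
        ensemble[i] += K @ (y_pert - H @ ensemble[i])
    return ensemble
```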
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios for Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement, leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.
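A structural sketch of the described filter is given below: each incoming sweep is smoothed across channels with a Gaussian kernel and then fed to an independent first-order (random-walk) Kalman update per channel. The process and measurement variances, and the conversion of the 27-channel breadth to a Gaussian sigma, are illustrative assumptions rather than the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def kalman_gauss_spectrum(sweeps, sigma_ch=27 / 2.355, q=1e-4, r=1.0):
    """First-order Kalman filtration of a spectrum accumulated sweep by
    sweep, with Gaussian cross-channel smoothing of each measurement.

    sweeps : (n_sweeps, n_channels) array of raw counts per sweep
    """
    n_ch = sweeps.shape[1]
    x = np.zeros(n_ch)          # state: filtered counts per channel
    p = np.ones(n_ch)           # per-channel error variance
    for z in sweeps:            # each new sweep is a measurement vector
        z_s = gaussian_filter1d(z.astype(float), sigma_ch)
        p = p + q               # predict (random-walk process model)
        k = p / (p + r)         # scalar Kalman gain per channel
        x = x + k * (z_s - x)   # update toward the smoothed measurement
        p = (1 - k) * p
    return x
```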
Controllable spin polarization and spin filtering in a zigzag silicene nanoribbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farokhnezhad, Mohsen, E-mail: Mohsen-farokhnezhad@physics.iust.ac.ir; Esmaeilzadeh, Mahdi, E-mail: mahdi@iust.ac.ir; Pournaghavi, Nezhat
2015-05-07
Using non-equilibrium Green's function, we study the spin-dependent electron transport properties in a zigzag silicene nanoribbon. To produce and control spin polarization, it is assumed that two ferromagnetic strips are deposited on both edges of the silicene nanoribbon and an electric field is applied perpendicular to the nanoribbon plane. The spin polarization is studied for both parallel and anti-parallel configurations of exchange magnetic fields induced by the ferromagnetic strips. We find that complete spin polarization can take place in the presence of a perpendicular electric field for the anti-parallel configuration, and the nanoribbon can work as a perfect spin filter. The spin direction of transmitted electrons can be easily changed from up to down and vice versa by reversing the electric field direction. For the parallel configuration, perfect spin filtering can occur even in the absence of an electric field. In this case, the spin direction can be changed by changing the electron energy. Finally, we investigate the effects of nonmagnetic Anderson disorder on the spin-dependent conductance and find that the perfect spin-filtering properties of the nanoribbon are destroyed by strong disorder, but the nanoribbon retains these properties in the presence of weak disorder.
A Consensus Definition of Cataplexy in Mouse Models of Narcolepsy
Scammell, Thomas E.; Willie, Jon T.; Guilleminault, Christian; Siegel, Jerome M.
2009-01-01
People with narcolepsy often have episodes of cataplexy, brief periods of muscle weakness triggered by strong emotions. Many researchers are now studying mouse models of narcolepsy, but definitions of cataplexy-like behavior in mice differ across labs. To establish a common language, the International Working Group on Rodent Models of Narcolepsy reviewed the literature on cataplexy in people with narcolepsy and in dog and mouse models of narcolepsy and then developed a consensus definition of murine cataplexy. The group concluded that murine cataplexy is an abrupt episode of nuchal atonia lasting at least 10 seconds. In addition, theta activity dominates the EEG during the episode, and video recordings document immobility. To distinguish a cataplexy episode from REM sleep after a brief awakening, at least 40 seconds of wakefulness must precede the episode. Bouts of cataplexy fitting this definition are common in mice with disrupted orexin/hypocretin signaling, but these events almost never occur in wild type mice. It remains unclear whether murine cataplexy is triggered by strong emotions or whether mice remain conscious during the episodes as in people with narcolepsy. This working definition provides helpful insights into murine cataplexy and should allow objective and accurate comparisons of cataplexy in future studies using mouse models of narcolepsy. Citation: Scammell TE; Willie JT; Guilleminault C; Siegel JM. A consensus definition of cataplexy in mouse models of narcolepsy. SLEEP 2009;32(1):111-116. PMID:19189786
Consensus on Changing Trends, Attitudes, and Concepts of Asian Beauty.
Liew, Steven; Wu, Woffles T L; Chan, Henry H; Ho, Wilson W S; Kim, Hee-Jin; Goodman, Greg J; Peng, Peter H L; Rogers, John D
2016-04-01
Asians increasingly seek non-surgical facial esthetic treatments, especially at younger ages. Published recommendations and clinical evidence mostly reference Western populations, but Asians differ from them in terms of attitudes to beauty, structural facial anatomy, and signs and rates of aging. A thorough knowledge of the key esthetic concerns and requirements for the Asian face is required to strategize appropriate facial esthetic treatments with botulinum toxin and hyaluronic acid (HA) fillers. The Asian Facial Aesthetics Expert Consensus Group met to develop consensus statements on concepts of facial beauty, key esthetic concerns, facial anatomy, and aging in Southeastern and Eastern Asians, as a prelude to developing consensus opinions on the cosmetic facial use of botulinum toxin and HA fillers in these populations. Beautiful and esthetically attractive people of all races share similarities in appearance while retaining distinct ethnic features. Asians between the third and sixth decades age well compared with age-matched Caucasians. Younger Asians' increasing requests for injectable treatments to improve facial shape and three-dimensionality often reflect a desire to correct underlying facial structural deficiencies or weaknesses that detract from ideals of facial beauty. Facial esthetic treatments in Asians are not aimed at Westernization, but rather the optimization of intrinsic Asian ethnic features, or correction of specific underlying structural features that are perceived as deficiencies. Thus, overall facial attractiveness is enhanced while retaining esthetic characteristics of Asian ethnicity. Because Asian patients age differently than Western patients, different management and treatment planning strategies are utilized. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266.
Proceedings of the Conference on Moments and Signal
NASA Astrophysics Data System (ADS)
Purdue, P.; Solomon, H.
1992-09-01
The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
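As a concrete instance of the Bussgang family, where a memoryless nonlinearity acts on the output of the adaptive equalization filter, here is a minimal constant-modulus (Godard, p = 2) update; the tap count and step size are illustrative assumptions.

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, n_iter=None):
    """Constant Modulus Algorithm, a Bussgang-type blind equalizer:
    the nonlinearity acts on the equalizer *output* y."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                                    # centre-spike init
    R2 = np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2)  # Godard radius
    n_iter = n_iter or len(x) - n_taps
    for k in range(n_iter):
        u = x[k:k + n_taps][::-1]          # regressor, most recent sample first
        y = w.conj() @ u                   # equalizer output
        e = y * (np.abs(y) ** 2 - R2)      # memoryless CMA nonlinearity
        w -= mu * np.conj(e) * u           # stochastic-gradient update
    return w
```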
Interactions of trace metals with hydrogels and filter membranes used in DET and DGT techniques.
Garmo, Oyvind A; Davison, William; Zhang, Hao
2008-08-01
Equilibrium partitioning of trace metals between bulk solution and hydrogels/filter membranes was studied. Under some conditions, trace metal concentrations were higher in the hydrogels or filter membranes than in the bulk solution (enrichment). In synthetic soft water, enrichment of cationic trace metals in polyacrylamide hydrogels decreased with increasing trace metal concentration. Enrichment was little affected by Ca and Mg in the concentration range typically encountered in natural freshwaters, indicating high-affinity but low-capacity binding of trace metals to the solid structure in polyacrylamide gels. The apparent binding strength decreased in the sequence Cu > Pb > Ni ≈ Cd ≈ Co, and a low concentration of cationic Cu eliminated enrichment of weakly binding trace metal cations. The polyacrylamide gels also had an affinity for fulvic acid and/or its trace metal complexes. Enrichment of cationic Cd in agarose gel and hydrophilic polyethersulfone filter was independent of concentration (10 nM to 5 μM) but decreased with increasing Ca/Mg concentration and ionic strength, suggesting that it is mainly due to electrostatic interactions. However, Cu and Pb were enriched even after equilibration in seawater, indicating that these metals additionally bind to sites within the agarose gel and filter. Compared to the polyacrylamide gels, agarose gel had a lower affinity for metal-fulvic complexes. Potential biases in measurements made with the diffusive equilibration in thin-films (DET) technique, identified by this work, are discussed.
Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.
Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko
2008-01-01
The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
Kotnik, Kristina; Kosjek, Tina; Žegura, Bojana; Filipič, Metka; Heath, Ester
2016-03-01
This study investigates the environmental fate of eight benzophenone derivatives (the pharmaceutical ketoprofen, its phototransformation products 3-ethylbenzophenone and 3-acetylbenzophenone, and five benzophenone-type UV filters) by evaluating their photolytic behaviour. In addition, the genotoxicity of these compounds and of the produced photodegradation mixtures was studied. Laboratory-scale irradiation experiments using a medium-pressure UV lamp revealed that photodegradation of benzophenones follows pseudo-first-order kinetics. Ketoprofen was the most photolabile (t1/2 = 0.8 min), while the UV filters were more resistant to UV light, with t1/2 between 17 and 99 h. The compounds were also exposed to irradiation by natural sunlight and showed photostability similar to that predicted under laboratory conditions. Solar photodegradation experiments were performed in distilled water, lake water and seawater, and revealed that photosensitizers present in natural waters significantly affect the photolytic behaviour of the investigated compounds. Here, the presence of lake water resulted in accelerated photodecomposition, while seawater showed compound-dependent effects on photodegradation. Further, it was shown that the ketoprofen transformation products 3-ethylbenzophenone and 3-acetylbenzophenone were formed under environmental conditions when ketoprofen was exposed to natural sunlight. Genotoxicity testing of the parent benzophenone compounds using the SOS/umuC assay revealed that the UV filters exhibited weak genotoxic activity in the presence of a metabolic activation system; however, the concentrations tested were much higher than those found in the environment (≥125 μg mL-1). After irradiation of the benzophenones, the produced photodegradation mixtures showed that, with the exception of benzophenone, which exhibited weak genotoxic activity, none of the compounds tested elicited any activity after exposure to UV light. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimized detection of shear peaks in weak lensing maps
NASA Astrophysics Data System (ADS)
Marian, Laura; Smith, Robert E.; Hilbert, Stefan; Schneider, Peter
2012-06-01
We present a new method to extract cosmological constraints from weak lensing (WL) peak counts, which we denote as 'the hierarchical algorithm'. The idea of this method is to combine information from WL maps sequentially smoothed with a series of filters of different size, from the largest down to the smallest, thus increasing the cosmological sensitivity of the resulting peak function. We compare the cosmological constraints resulting from the peak abundance measured in this way and the abundance obtained by using a filter of fixed size, which is the standard practice in WL peak studies. For this purpose, we employ a large set of WL maps generated by ray tracing through N-body simulations, and the Fisher matrix formalism. We find that if low signal-to-noise ratio (?) peaks are included in the analysis (?), the hierarchical method yields constraints significantly better than the single-sized filtering. For a large future survey such as Euclid or the Large Synoptic Survey Telescope, combined with information from a cosmic microwave background experiment like Planck, the results for the hierarchical (single-sized) method are Δns = 0.0039 (0.004), ΔΩm = 0.002 (0.0045), Δσ8 = 0.003 (0.006) and Δw = 0.019 (0.0525). This forecast is conservative, as we assume no knowledge of the redshifts of the lenses, and consider a single broad bin for the redshifts of the sources. If only peaks with ? are considered, then there is little difference between the results of the two methods. We also examine the statistical properties of the hierarchical peak function: its covariance matrix has off-diagonal terms for bins with ? and aperture mass of M < 3 × 10^14 h^-1 M⊙, the higher bins being largely uncorrelated and therefore well described by a Poisson distribution.
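A highly simplified stand-in for multi-scale peak counting is sketched below: the map is smoothed with a sequence of filters from the largest scale down, and local maxima above a signal-to-noise threshold are counted at each scale. The Gaussian filters, the crude noise normalisation and the thresholds are assumptions of the sketch; the paper's hierarchical algorithm additionally links detections across scales.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def multiscale_peak_counts(kappa_map, sigmas=(8, 4, 2), snr_min=3.0):
    """Count peaks in a convergence map smoothed with a sequence of
    filters, from the largest scale down to the smallest."""
    counts = {}
    for s in sigmas:
        sm = gaussian_filter(kappa_map, s)
        snr = sm / sm.std()                  # crude noise normalisation
        # A pixel is a peak if it equals the local maximum of its
        # 3x3 neighbourhood and exceeds the S/N threshold.
        is_peak = (maximum_filter(snr, size=3) == snr) & (snr > snr_min)
        counts[s] = int(is_peak.sum())
    return counts
```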
Neerincx, Pieter BT; Casel, Pierrot; Prickett, Dennis; Nie, Haisheng; Watson, Michael; Leunissen, Jack AM; Groenen, Martien AM; Klopp, Christophe
2009-01-01
Background Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joined EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results. Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria infected chickens and finally we propose guidelines for optimal annotation strategies. Results IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The 3 pipelines can assign oligos to target specificity categories although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust based on their experimental conditions. Linking oligos to annotation on the other hand is based on rigid rules, which differ between pipelines. For 52.7% of the oligos from a subset selected for in depth comparison all pipelines linked to one or more Ensembl genes with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation potential filtering of oligo versus target-gene alignments and different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis with consensus on only 67.2% of the enriched terms. Conclusion In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them. These relationships can then be used to judge the varying degrees of reliability allowing users to fine-tune the balance between reliability and coverage. This is important as it can have a significant effect on functional microarray analysis as exemplified by the lack of consensus on almost one third of the terms found with GO term enrichment analysis based on updated IMAD, OligoRAP or sigReannot annotation. PMID:19615109
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalva, Sanjeeva P., E-mail: skalva@partners.org; Athanasoulis, Christos A.; Fan, C.-M.
The purpose of the study was to assess the clinical safety and efficacy of the 'Recovery™' (Bard) inferior vena cava (IVC) filter. We retrospectively evaluated the clinical and imaging data of patients who had a 'Recovery™' IVC filter placed between January 2003 and December 2004 in our institution. The clinical presentation, indications, and procedure-related complications during placement and retrieval were evaluated. Follow-up computed tomography (CT) examinations of the abdomen and chest were evaluated for filter-related complications and pulmonary embolism (PE), respectively. 'Recovery' filters were placed in 96 patients (72 males and 24 females; age range: 16-87 years; mean: 46 years). Twenty-four patients presented with PE, 13 with deep vein thrombosis (DVT) and 2 with both PE and DVT. The remaining 57 patients had no symptoms of thromboembolism. Indications for filter placement included contraindication to anticoagulation (n = 27), complication of anticoagulation (n = 3), failure of anticoagulation (n = 5), and prophylaxis (n = 61). The device was successfully deployed in the infrarenal (n = 95) or suprarenal (n = 1) IVC through a femoral vein approach. Retrieval was attempted in 11 patients after a mean period of 117 days (range: 24-426). The filter was successfully removed in nine patients (82%). Failure of retrieval was due to technical difficulty (n = 1) and the presence of thrombus in the filter (n = 1). One of the nine patients who had the filter removed developed IVC thrombus after retrieval and another had an intimal tear of the IVC. Follow-up abdominal CT (n = 40) at a mean of 80 days (range: 1-513) showed penetration of the IVC by the filter arms in 11, of which 3 had fracture of filter components. In one patient, a broken arm migrated into the pancreas. Asymmetric deployment of the filter legs was seen in 12 patients and thrombus within the filter in 2 patients. No filter migration or caval occlusion was encountered. Follow-up chest CT (n = 27) at a mean of 63 days (range: 1-386) showed PE in one patient (3%). During clinical follow-up, 12 of 96 patients developed symptoms of PE and only 1 of the 12 had PE on CT. There was no fatal pulmonary embolism in our group of patients following 'Recovery' filter placement. However, the current version of the filter is associated with structural weakness, a high incidence of IVC wall penetration, and asymmetric deployment of the filter legs.
NASA Astrophysics Data System (ADS)
López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio
2017-12-01
Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the faulty periodic signal is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) is a technique that has changed the traditional de-noising process: the weak periodic fault signal can be identified by adding an expression, the potential, to the raw signal and solving a differential equation problem. However, current SR methods have some deficiencies, such as limited filtering performance, the requirement of a low-frequency input signal, and a sequential search for optimum parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the optimum SR parameters by the use of particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.
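A minimal simulation of the underlying mechanism is sketched below: a FitzHugh-Nagumo system driven by a weak subthreshold periodic signal plus noise, integrated with Euler-Maruyama; at a suitable noise level the output locks to the weak signal (stochastic resonance). All parameter values are illustrative assumptions, and the paper's PSO search over the SR parameters is only indicated in a comment.

```python
import numpy as np

def fhn_sr(signal, dt=1e-3, noise=0.04, a=1.05, eps=0.08):
    """FitzHugh-Nagumo system driven by a weak periodic signal plus noise
    (Euler-Maruyama integration); parameters are illustrative only."""
    rng = np.random.default_rng(0)
    v, w = -1.0, -0.5
    vs = np.empty(signal.size)
    for i, s in enumerate(signal):
        # Noise term scaled so that dt * (noise / sqrt(dt)) = noise * sqrt(dt)
        dv = v - v ** 3 / 3 - w + s + noise * rng.standard_normal() / np.sqrt(dt)
        dw = eps * (v + a - 0.5 * w)
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

t = np.arange(0, 100, 1e-3)
weak = 0.03 * np.sin(2 * np.pi * 0.05 * t)   # weak periodic "fault" signal
out = fhn_sr(weak)
# The paper additionally tunes the SR parameters (e.g. noise, eps) with
# particle swarm optimisation to maximise an output SNR criterion.
```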
Re-analysis of correlations among four impulsivity scales.
Gallardo-Pujol, David; Andrés-Pueyo, Antonio
2006-08-01
Impulsivity plays a key role in normal and pathological behavior. Although there is some consensus about its conceptualization, there have been many attempts to build a multidimensional tool due to the lack of agreement on how to measure it. A recent study claimed support for a three-dimensional structure of impulsivity, but with weak empirical support. By re-analysing those data, a four-factor structure was found to describe the correlation matrix much better. The debate remains open and further research is needed to clarify the factor structure. The desirability of constructing new measures, perhaps analogously to the Wechsler Intelligence Scale, is emphasized.
Design of a Mechanical-Tunable Filter Spectrometer for Noninvasive Glucose Measurement
NASA Astrophysics Data System (ADS)
Saptari, Vidi; Youcef-Toumi, Kamal
2004-05-01
The development of an accurate and reliable noninvasive near-infrared (NIR) glucose sensor hinges on the success in addressing the sensitivity and the specificity problems associated with the weak glucose signals and the overlapping NIR spectra. Spectroscopic hardware parameters most relevant to noninvasive blood glucose measurement are discussed, which include the optical throughput, integration time, spectral range, and the spectral resolution. We propose a unique spectroscopic system using a continuously rotating interference filter, which produces a signal-to-noise ratio of the order of 10^5 and is estimated to be the minimum required for successful in vivo glucose sensing. Using a classical least-squares algorithm and a spectral range between 2180 and 2312 nm, we extracted clinically relevant glucose concentrations in multicomponent solutions containing bovine serum albumin, triacetin, lactate, and urea.
The influence of extraction procedure on ion concentrations in sediment pore water
Winger, P.V.; Lasier, P.J.; Jackson, B.P.
1998-01-01
Sediment pore water has the potential to yield important information on sediment quality, but the influence of isolation procedures on its chemistry and toxicity is not completely known, and consensus on methods for its isolation from sediment has not been reached. To provide additional insight into the influence of collection procedures on pore water chemistry, anion (filtered only) and cation concentrations were measured in filtered and unfiltered pore water isolated from four sediments using three different procedures: dialysis, centrifugation and vacuum. Peepers were constructed using 24-cell culture plates and cellulose membranes, and vacuum extractors consisted of fused-glass air stones attached with airline tubing to 60 cc syringes. Centrifugation was performed at two speeds (2,500 and 10,000 × g) for 30 min in a refrigerated centrifuge maintained at 4 °C. Only minor differences in chemical characteristics and cation and anion concentrations were found among the different collecting methods, with differences being sediment specific. Filtering of the pore water did not appreciably reduce major cation concentrations, but trace metals (Cu and Pb) were markedly reduced. Although the extraction methods evaluated produced pore waters of similar chemistries, the vacuum extractor provided the following advantages over the other methods: (1) ease of extraction, (2) volumes of pore water isolated, (3) minimal preparation time and (4) least time required for extraction of pore water from multiple samples at one time.
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions of existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated into the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.
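As a toy illustration of the agreement-type filtering described above, the sketch below runs a scalar Kalman update at each node followed by a consensus-averaging step with its neighbors. The dynamics, noise levels and network topology are invented for illustration; this is not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
x = 0.0                      # true (scalar) formation state, illustrative
A = np.array([[0, 1, 1, 0],  # adjacency of the local exchange network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
n = A.shape[0]
xhat = np.zeros(n)           # local estimates, one per spacecraft
P = np.ones(n)               # local error variances
R, Q = 0.5, 0.01             # measurement / process noise variances
eps = 0.2                    # consensus weight (eps * max degree < 1 for stability)

for t in range(50):
    x += rng.normal(0, np.sqrt(Q))                 # state evolves (random walk)
    z = x + rng.normal(0, np.sqrt(R), n)           # local measurements
    # local Kalman update at each node
    P += Q
    K = P / (P + R)
    xhat += K * (z - xhat)
    P *= (1 - K)
    # agreement step: each node averages with its neighbors
    deg = A.sum(1)
    xhat += eps * (A @ xhat - deg * xhat)

print(x, xhat)               # all local estimates converge near the truth
```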
Gülay, Arda; Smets, Barth F
2015-09-01
Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
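The meta-community null idea can be sketched compactly: pool the samples into a meta-community, rarefy repeated subsamples from the pool to a common depth, and compare the observed dissimilarity against that null distribution. The sketch below uses Bray-Curtis dissimilarity and synthetic OTU counts; it is a simplification of the published procedure, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def rarefy(counts, depth):
    """Subsample a community to a fixed depth without replacement."""
    pool = np.repeat(np.arange(counts.size), counts)
    picks = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picks, minlength=counts.size)

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

s1 = rng.poisson(20, 50)          # OTU counts, sample 1 (illustrative)
s2 = rng.poisson(22, 50)          # OTU counts, sample 2
depth = min(s1.sum(), s2.sum())

obs = bray_curtis(rarefy(s1, depth), rarefy(s2, depth))

# null: both "samples" drawn from the pooled meta-community
meta = s1 + s2
null = np.array([bray_curtis(rarefy(meta, depth), rarefy(meta, depth))
                 for _ in range(999)])
p = (null >= obs).mean()
print(f"observed BC={obs:.3f}, null 95%={np.quantile(null, 0.95):.3f}, p~{p:.3f}")
```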
Harford, Joe B; Otero, Isabel V; Anderson, Benjamin O; Cazap, Eduardo; Gradishar, William J; Gralow, Julie R; Kane, Gabrielle M; Niëns, Laurens M; Porter, Peggy L; Reeler, Anne V; Rieger, Paula T; Shockney, Lillie D; Shulman, Lawrence N; Soldak, Tanya; Thomas, David B; Thompson, Beti; Winchester, David P; Zelle, Sten G; Badwe, Rajendra A
2011-04-01
International collaborations like the Breast Health Global Initiative (BHGI) can help low and middle income countries (LMCs) to establish or improve breast cancer control programs by providing evidence-based, resource-stratified guidelines for the management and control of breast cancer. The Problem Solving Working Group of the BHGI 2010 Global Summit met to develop a consensus statement on problem-solving strategies addressing breast cancer in LMCs. To better assess breast cancer burden in poorly studied populations, countries require accurate statistics regarding breast cancer incidence and mortality. To better identify health care system strengths and weaknesses, countries require reasonable indicators of true health system quality and capacity. Using qualitative and quantitative research methods, countries should formulate cancer control strategies to identify both system inefficiencies and patient barriers. Patient navigation programs linked to public advocacy efforts feed and strengthen functional early detection and treatment programs. Cost-effectiveness research and implementation science are tools that can guide and expand successful pilot programs. Copyright © 2011 Elsevier Ltd. All rights reserved.
Randelli, F; Romanini, E; Biggi, F; Danelli, G; Della Rocca, G; Laurora, N R; Imberti, D; Palareti, G; Prisco, D
2013-03-01
Pharmacological prophylaxis for preventing venous thromboembolism (VTE) is a worldwide established procedure in hip and knee replacement surgery, as well as in the treatment of femoral neck fractures, but few data exist in other fields of orthopaedics and traumatology. Thus, no guidelines or recommendations are available in the literature except for a limited number of weak statements about knee arthroscopy and lower limb fractures. In any case, none of them is a multidisciplinary effort like the one presented here. The Italian Society for Studies on Haemostasis and Thrombosis (SISET), the Italian Society of Orthopaedics and Traumatology (SIOT), the Association of Orthopaedic Traumatology of Italian Hospitals (OTODI), together with the Italian Society of Anesthesia, Analgesia, Resuscitation and Intensive Care (SIAARTI) and the Italian Society of General Medicine (SIMG), have set down easy and quick suggestions for VTE prophylaxis in a number of surgical conditions for which only scarce evidence is available. This inter-society consensus statement aims at simplifying the approach to VTE prophylaxis in the individual patient, with the goal of improving its clinical application.
Hydro-climate and ecological behaviour of the drought of Amazonia in 2005.
Marengo, J A; Nobre, C A; Tomasella, J; Cardoso, M F; Oyama, M D
2008-05-27
In 2005, southwestern Amazonia experienced the effects of an intense drought that affected life and biodiversity. Several major tributaries as well as parts of the main river itself contained only a fraction of their normal volumes of water, and lakes were drying up. The consequences for local people, animals and the forest itself are impossible to estimate now, but they are likely to be serious. The analyses indicate that the drought was manifested as a weak peak river season during autumn to winter, as a consequence of a weak summertime season in southwestern Amazonia; the winter season was also anomalously warm and dry, with rainfall sometimes reaching only 25% of the climatological value, which helped the propagation of fires. Analyses of climatic and hydrological records in Amazonia suggest a broad consensus that the 2005 drought was linked not to El Niño, as with most previous droughts in the Amazon, but to warming sea surface temperatures in the tropical North Atlantic Ocean.
NASA Astrophysics Data System (ADS)
Kim, Taehwan; Kim, Sungho
2017-02-01
This paper presents a novel method for detecting remote pedestrians. After producing a brightness-enhanced image based on human temperature from the input temperature data, we generate regions of interest (ROIs) with a multiscale contrast-filtering approach that includes a biased hysteresis threshold and clustering, together with the remote pedestrian's height, pixel area and central position information. We then apply local vertical and horizontal projection-based ROI refinement and a weak aspect-ratio-based ROI limitation to counter the region expansion introduced in the contrast-filtering stage. Finally, we detect remote pedestrians by validating the final ROIs using transfer learning with convolutional neural network (CNN) features, followed by non-maximal suppression (NMS) with a strong aspect-ratio limitation to improve detection performance. In the experiments, we confirmed that the proposed contrast filtering and locally projected region based CNN (CFLP-CNN) outperforms the baseline method by 8% in terms of log-averaged miss rate. The proposed method also yields regions better adjusted to the shape and appearance of remote pedestrians, allowing it to detect pedestrians missed by the baseline approach and to split groups of people into individual detections.
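A minimal sketch of the ROI-generation stage described above, using a difference-of-means local contrast map and scikit-image's hysteresis threshold; the window sizes and thresholds are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label
from skimage.filters import apply_hysteresis_threshold

def roi_candidates(ir_image, inner=3, outer=15, low=2.0, high=4.0):
    """Multiscale-contrast ROI generation (illustrative parameters)."""
    # local contrast: small-scale mean minus large-scale background mean
    contrast = uniform_filter(ir_image, inner) - uniform_filter(ir_image, outer)
    # biased hysteresis threshold: keep weak responses connected to strong ones
    mask = apply_hysteresis_threshold(contrast, low, high)
    labels, n = label(mask)          # connected components = candidate ROIs
    return labels, n

ir = np.random.default_rng(3).normal(0, 1, (120, 160))
ir[40:52, 70:75] += 6.0              # a warm, pedestrian-sized blob (synthetic)
labels, n = roi_candidates(ir)
print(n, "candidate ROI(s)")
```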
WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering.
Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin
2012-04-01
Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. The online server is under construction. Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa.
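The three-stage pipeline (wavelet smoothing, local-maxima picking, volume-based filtering) can be sketched in one dimension as follows; the wavelet family, threshold rule and volume cutoff are plausible choices for illustration, not the published implementation.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 1024)
spectrum = (np.exp(-((x - 0.3) / 0.01) ** 2)            # strong peak
            + 0.15 * np.exp(-((x - 0.7) / 0.01) ** 2)   # weak peak near the noise
            + rng.normal(0, 0.05, x.size))

# 1) wavelet smoothing: soft-threshold detail coefficients, keep every point
coeffs = pywt.wavedec(spectrum, "sym8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(spectrum.size))
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "sym8")[:spectrum.size]

# 2) candidate peaks = local maxima of the smoothed signal
peaks, _ = find_peaks(smooth)

# 3) volume-based filtering: integrate around each maximum, reject noise bumps
half = 10
volume = np.array([smooth[max(p - half, 0):p + half].sum() for p in peaks])
keep = peaks[volume > 3 * sigma * np.sqrt(2 * half)]
print(x[keep])                      # ideally reports both peaks, weak one included
```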
Castelnuovo, Gianluca; Giusti, Emanuele Maria; Manzoni, Gian Mauro; Saviola, Donatella; Gabrielli, Samantha; Lacerenza, Marco; Pietrabissa, Giada; Cattivelli, Roberto; Spatola, Chiara Anna Maria; Rossi, Alessandro; Varallo, Giorgia; Novelli, Margherita; Villa, Valentina; Luzzati, Francesca; Cottini, Andrea; Lai, Carlo; Volpato, Eleonora; Cavalera, Cesare; Pagnini, Francesco; Tesio, Valentina; Castelli, Lorys; Tavola, Mario; Torta, Riccardo; Arreghini, Marco; Zanini, Loredana; Brunani, Amelia; Seitanidis, Ionathan; Ventura, Giuseppe; Capodaglio, Paolo; D'Aniello, Guido Edoardo; Scarpina, Federica; Brioschi, Andrea; Bigoni, Matteo; Priano, Lorenzo; Mauro, Alessandro; Riva, Giuseppe; Di Lernia, Daniele; Repetto, Claudia; Regalia, Camillo; Molinari, Enrico; Notaro, Paolo; Paolucci, Stefano; Sandrini, Giorgio; Simpson, Susan; Wiederhold, Brenda Kay; Gaudio, Santino; Jackson, Jeffrey B; Tamburin, Stefano; Benedetti, Fabrizio
2018-01-01
It is increasingly acknowledged that the outcomes of medical treatments are influenced by the context of the clinical encounter through the mechanisms of the placebo effect. The phenomenon of placebo analgesia might be exploited to maximize the efficacy of neurorehabilitation treatments. Since its intensity varies across neurological disorders, the Italian Consensus Conference on Pain in Neurorehabilitation (ICCP) summarized the studies in this field to provide guidance on its use. A review of the existing reviews and meta-analyses was performed to assess the magnitude of the placebo effect in disorders that may undergo neurorehabilitation treatment. The search was performed on PubMed using placebo, pain, and the names of neurological disorders as keywords. Methodological quality was assessed using a pre-existing checklist. Data about the magnitude of the placebo effect were extracted from the included reviews and discussed in narrative form. Eleven articles were included in this review. Placebo treatments showed weak effects in central neuropathic pain (pain reduction from 0.44 to 0.66 on a 0-10 scale) and moderate effects in postherpetic neuralgia (1.16), in diabetic peripheral neuropathy (1.45), and in pain associated with HIV (1.82). Moderate effects were also found for pain due to fibromyalgia and migraine; only weak short-term effects were found in complex regional pain syndrome. Confounding variables might have influenced these results. These estimates should be interpreted with caution, but they underscore that the placebo effect can be exploited in neurorehabilitation programs. It is not necessary to conceal its use from the patient. Knowledge of placebo mechanisms can be used to shape the doctor-patient relationship, to reduce the use of analgesic drugs, and to train the patient to become an active agent of the therapy.
Sarcopenia and Health Care Utilization in Older Women
Lui, Li-Yung; McCulloch, Charles E.; Cauley, Jane A.; Paudel, Misti L.; Taylor, Brent; Schousboe, John T.; Ensrud, Kristine E.
2017-01-01
Background: Although there are several consensus definitions of sarcopenia, their association with health care utilization has not been studied. Methods: We included women from the prospective Study of Osteoporotic Fractures with complete assessment of sarcopenia by several definitions at the Study of Osteoporotic Fractures Year 10 (Y10) exam (1997–1998) who also had available data from Medicare Fee-For-Service claims (N = 566) or Kaiser Encounter data (N = 194). Sarcopenia definitions evaluated were: International Working Group, European Working Group for Sarcopenia in Older Persons, Foundation for the NIH Sarcopenia Project, Baumgartner, and Newman. Hurdle models and logistic regression were used to assess the relation between sarcopenia status (the summary definition and the components of slowness, weakness and/or lean mass) and outcomes that included hospitalizations, cumulative inpatient days/year, and short-term (Part A paid) skilled nursing facility stay in the 3 years following the Y10 visit. Results: None of the consensus definitions, nor the definition components of weakness or low lean mass, was associated with increased risk of hospitalization or greater likelihood of short-term skilled nursing facility stay. Women with slowness by any criterion definition were about 50% more likely to be hospitalized; had a greater rate of hospitalization days amongst those hospitalized; and had 1.8 to 2.1 times greater likelihood of a short-term skilled nursing facility stay than women without slowness. There was the suggestion of a protective association of low lean mass by the various criterion definitions on short-term skilled nursing facility stay. Conclusion: Estimated effects of sarcopenia on health care utilization were negligible. However, slowness was associated with greater health care utilization. PMID:27402050
A New Adaptive Framework for Collaborative Filtering Prediction
Almosallam, Ibrahim A.; Shang, Yi
2010-01-01
Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924
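The core z-score idea is simple to sketch: normalize each user's ratings by their own mean and standard deviation, predict in z-score space from similar users, then map back to the target user's scale. The rating matrix and neighborhood size below are toy illustrations; the adaptive user/item/hybrid switching of the full framework is omitted.

```python
import numpy as np

R = np.array([[5, 4, 0, 1],          # 0 = missing rating (toy matrix)
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
mask = R > 0

mu = R.sum(1) / mask.sum(1)          # per-user mean over rated items only
sd = np.array([max(R[i, mask[i]].std(), 1e-9) for i in range(R.shape[0])])
Z = np.where(mask, (R - mu[:, None]) / sd[:, None], 0.0)   # z-scored ratings

def predict(u, i, k=2):
    """User-based prediction for user u, item i from the k most similar raters."""
    raters = np.where(mask[:, i])[0]
    sims = (Z[raters] @ Z[u]) / (np.linalg.norm(Z[raters], axis=1)
                                 * np.linalg.norm(Z[u]) + 1e-9)
    order = np.argsort(-sims)[:k]
    top, w = raters[order], sims[order]
    zhat = (w @ Z[top, i]) / (np.abs(w).sum() + 1e-9)
    return mu[u] + sd[u] * zhat      # map the z-score back to the rating scale

print(predict(0, 2))                 # user 0's estimate for item 2
```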
Flame-conditioned turbulence modeling for reacting flows
NASA Astrophysics Data System (ADS)
Macart, Jonathan F.; Mueller, Michael E.
2017-11-01
Conventional approaches to turbulence modeling in reacting flows rely on unconditional averaging or filtering, that is, consideration of the momentum equations only in physical space, implicitly assuming that the flame only weakly affects the turbulence, aside from a variation in density. Conversely, for scalars, which are strongly coupled to the flame structure, their evolution equations are often projected onto a reduced-order manifold, that is, conditionally averaged or filtered, on a flame variable such as a mixture fraction or progress variable. Such approaches include Conditional Moment Closure (CMC) and related variants. However, recent observations from Direct Numerical Simulation (DNS) have indicated that the flame can strongly affect turbulence in premixed combustion at low Karlovitz number. In this work, a new approach to turbulence modeling for reacting flows is investigated in which conditionally averaged or filtered equations are evolved for the momentum. The conditionally-averaged equations for the velocity and its covariances are derived, and budgets are evaluated from DNS databases of turbulent premixed planar jet flames. The most important terms in these equations are identified, and preliminary closure models are proposed.
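Conditional averaging itself reduces to binning by the flame variable. The sketch below computes the conditional mean ⟨u|c⟩ and conditional variance on synthetic samples, with a progress variable c binned on a 20-point manifold; the data and bin count are illustrative, not the DNS databases of the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
c = rng.uniform(0, 1, n)                     # progress variable samples
u = 2.0 * c + rng.normal(0, 0.3, n)          # velocity correlated with the flame

# conditional mean <u | c> on a reduced-order manifold of 20 bins in c
edges = np.linspace(0, 1, 21)
idx = np.clip(np.digitize(c, edges) - 1, 0, 19)
counts = np.bincount(idx, minlength=20)
u_cond = np.bincount(idx, weights=u, minlength=20) / counts

# conditional variance <u'u' | c>, the kind of quantity whose budget is analyzed
u_fluct = u - u_cond[idx]
uu_cond = np.bincount(idx, weights=u_fluct**2, minlength=20) / counts
print(u_cond[:3], uu_cond[:3])
```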
Transmission properties of one-dimensional ternary plasma photonic crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiveshwari, Laxmi; Awasthi, S. K.
2015-09-15
Omnidirectional photonic band gaps (PBGs) are found in one-dimensional ternary plasma photonic crystals (PPCs) composed of single negative metamaterials. The band characteristics and transmission properties are investigated through the transfer matrix method. We show that the proposed structure can trap light in three-dimensional space due to the elimination of Brewster's angle transmission resonance, allowing the existence of a complete PBG. The results are discussed in terms of incident angle, layer thickness, dielectric constant of the dielectric material, and number of unit cells (N) for TE and TM polarizations. The PBG characteristics are apparent even in an N ≥ 2 system, which is weakly sensitive to the incident angle and completely insensitive to the polarization. A finite PPC could be used as a multichannel transmission filter without introducing any defect in the geometry. We show that the locations of the multichannel transmission peaks lie in the allowed band of the infinite structure. The structure can work as a single- or multichannel filter by varying the number of unit cells. A binary PPC can also work as a polarization-sensitive tunable filter.
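The transfer matrix method for a 1D multilayer is compact enough to sketch. The version below computes normal-incidence transmittance for a ternary unit cell repeated N times; the refractive indices are fixed, lossless placeholders, so the frequency-dependent single-negative response of the plasma/metamaterial layers is not modeled.

```python
import numpy as np

def transmission(n_layers, d_layers, lam, n_in=1.0, n_out=1.0):
    """Transfer-matrix transmittance of a 1D multilayer at normal incidence."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / lam          # phase thickness of the layer
        m = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ m
    B, C = M @ np.array([1.0, n_out])
    t = 2 * n_in / (n_in * B + C)
    return (n_out / n_in) * np.abs(t) ** 2       # valid for lossless media

# ternary unit cell (A|B|C), repeated N times -- indices/thicknesses illustrative
cell_n = [1.5, 2.2, 3.0]
cell_d = [100e-9, 80e-9, 60e-9]
N = 4
lams = np.linspace(400e-9, 1200e-9, 500)
T = [transmission(cell_n * N, cell_d * N, lam) for lam in lams]
print(min(T), max(T))                            # band gaps show up as T ~ 0
```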
Shulman, Stanley A.; Brisson, Michael J.; Howe, Alan M.
2015-01-01
Inductively coupled plasma mass spectrometry (ICP-MS) is becoming more widely used for trace elemental analysis in the occupational hygiene field, and consequently new ICP-MS international standard procedures have been promulgated by ASTM International and ISO. However, there is a dearth of interlaboratory performance data for this analytical methodology. In an effort to fill this data void, an interlaboratory evaluation of ICP-MS for determining trace elements in workplace air samples was conducted, towards fulfillment of method validation requirements for international voluntary consensus standard test methods. The study was performed in accordance with applicable statistical procedures for investigating interlaboratory precision. The evaluation was carried out using certified 37-mm diameter mixed-cellulose ester (MCE) filters that were fortified with 21 elements of concern in occupational hygiene. Elements were spiked at levels ranging from 0.025 to 10 μg filter⁻¹, with three different filter loadings denoted "Low", "Medium" and "High". Participating laboratories were recruited from a pool of over fifty invitees; ultimately, twenty laboratories from Europe, North America and Asia submitted results. Triplicates of each certified filter with elemental contents at three different levels, plus media blanks spiked with reagent, were conveyed to each volunteer laboratory. Each participant was also provided a copy of the test method and was asked to follow it; spiking levels were unknown to the participants. The laboratories were requested to prepare the filters by one of three sample preparation procedures, i.e., hotplate digestion, microwave digestion or hot block extraction, which were described in the test method. Participants were then asked to analyze aliquots of the prepared samples by ICP-MS, and to report their data in units of μg filter⁻¹. Most interlaboratory precision estimates were acceptable for medium- and high-level spikes (RSD < 25%), but generally yielded greater uncertainties than were anticipated at the outset of the study. PMID:22038017
Definition and classification of negative motor signs in childhood.
Sanger, Terence D; Chen, Daofen; Delgado, Mauricio R; Gaebler-Spira, Deborah; Hallett, Mark; Mink, Jonathan W
2006-11-01
In this report we describe the outcome of a consensus meeting that occurred at the National Institutes of Health in Bethesda, Maryland, March 12 through 14, 2005. The meeting brought together 39 specialists from multiple clinical and research disciplines including developmental pediatrics, neurology, neurosurgery, orthopedic surgery, physical therapy, occupational therapy, physical medicine and rehabilitation, neurophysiology, muscle physiology, motor control, and biomechanics. The purpose of the meeting was to establish terminology and definitions for 4 aspects of motor disorders that occur in children: weakness, reduced selective motor control, ataxia, and deficits of praxis. The purpose of the definitions is to assist communication between clinicians, select homogeneous groups of children for clinical research trials, facilitate the development of rating scales to assess improvement or deterioration with time, and eventually to better match individual children with specific therapies. "Weakness" is defined as the inability to generate normal voluntary force in a muscle or normal voluntary torque about a joint. "Reduced selective motor control" is defined as the impaired ability to isolate the activation of muscles in a selected pattern in response to demands of a voluntary posture or movement. "Ataxia" is defined as an inability to generate a normal or expected voluntary movement trajectory that cannot be attributed to weakness or involuntary muscle activity about the affected joints. "Apraxia" is defined as an impairment in the ability to accomplish previously learned and performed complex motor actions that is not explained by ataxia, reduced selective motor control, weakness, or involuntary motor activity. "Developmental dyspraxia" is defined as a failure to have ever acquired the ability to perform age-appropriate complex motor actions that is not explained by the presence of inadequate demonstration or practice, ataxia, reduced selective motor control, weakness, or involuntary motor activity.
Abramovitch, Amitai; Mittelman, Andrew; Tankersley, Amelia P; Abramowitz, Jonathan S; Schweiger, Avraham
2015-07-30
The inconsistent nature of the neuropsychology literature pertaining to obsessive-compulsive disorder (OCD) has long been recognized. However, individual studies, systematic reviews, and recent meta-analytic reviews were unsuccessful in establishing a consensus regarding a disorder-specific neuropsychological profile. In an attempt to identify methodological factors that may contribute to the inconsistency that is characteristic of this body of research, a systematic review of methodological factors in studies comparing OCD patients and non-psychiatric controls on neuropsychological tests was conducted. This review covered 115 studies that included nearly 3500 patients. Results revealed a range of methodological weaknesses. Some of these weaknesses have been previously noted in the broader neuropsychological literature, while some are more specific to psychiatric disorders, and to OCD. These methodological shortcomings have the potential to hinder the identification of a specific neuropsychological profile associated with OCD as well as to obscure the association between neurocognitive dysfunctions and contemporary neurobiological models. Rectifying these weaknesses may facilitate replicability, and promote our ability to extract cogent, meaningful, and more unified inferences regarding the neuropsychology of OCD. To that end, we present a set of methodological recommendations to facilitate future neuropsychology research in psychiatric disorders in general, and in OCD in particular. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
An image mosaic method based on corner
NASA Astrophysics Data System (ADS)
Jiang, Zetao; Nie, Heting
2015-08-01
In view of the shortcomings of traditional image mosaicking, this paper describes a new mosaic algorithm based on Harris corners. Firstly, a Harris operator, combined with a low-pass smoothing filter constructed from spline functions and a circular window search, is applied to detect image corners; this gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus. Finally, weighted trigonometric blending combined with an interpolation function is used for image fusion. Experiments show that this method effectively removes splicing ghosting and improves the accuracy of the image mosaic.
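A rough OpenCV rendering of that pipeline is sketched below: Harris-response corners, patch correlation for putative matches, a RANSAC homography, and a crude blend standing in for the paper's weighted trigonometric fusion. The spline-based smoothing filter and circular window search are not reproduced, and all parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def mosaic(img1, img2, patch=11, n_corners=300):
    """Stitch img2 onto img1. Assumes overlapping views and >= 4 good matches."""
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    # Harris-response corners (the paper adds spline smoothing + circular search)
    corners = cv2.goodFeaturesToTrack(g1, n_corners, 0.01, 10,
                                      useHarrisDetector=True)
    h = patch // 2
    pts1, pts2 = [], []
    # putative matches by normalized cross-correlation of local patches
    for (x, y) in corners.reshape(-1, 2).astype(int):
        if h <= x < g1.shape[1] - h and h <= y < g1.shape[0] - h:
            tpl = g1[y - h:y + h + 1, x - h:x + h + 1]
            res = cv2.matchTemplate(g2, tpl, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > 0.9:                  # keep confident matches only
                pts1.append((x, y))
                pts2.append((loc[0] + h, loc[1] + h))
    # random sample consensus removes the remaining false registrations
    H, _ = cv2.findHomography(np.float32(pts2), np.float32(pts1),
                              cv2.RANSAC, 3.0)
    canvas = cv2.warpPerspective(
        img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    # crude overlap fusion; the paper uses weighted trigonometric blending
    canvas[:, :img1.shape[1]] = np.maximum(canvas[:, :img1.shape[1]], img1)
    return canvas

# pano = mosaic(cv2.imread("left.jpg"), cv2.imread("right.jpg"))  # hypothetical files
```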
PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement.
McGowan, Jessie; Sampson, Margaret; Salzwedel, Douglas M; Cogo, Elise; Foerster, Vicki; Lefebvre, Carol
2016-07-01
To develop an evidence-based guideline for Peer Review of Electronic Search Strategies (PRESS) for systematic reviews (SRs), health technology assessments, and other evidence syntheses. An SR, Web-based survey of experts, and consensus development forum were undertaken to identify checklists that evaluated or validated electronic literature search strategies and to determine which of their elements related to search quality or errors. Systematic review: No new search elements were identified for addition to the existing (2008-2010) PRESS 2015 Evidence-Based Checklist, and there was no evidence refuting any of its elements. Results suggested that structured PRESS could identify search errors and improve the selection of search terms. Web-based survey of experts: Most respondents felt that peer review should be undertaken after the MEDLINE search had been prepared but before it had been translated to other databases. Consensus development forum: Of the seven original PRESS elements, six were retained: translation of the research question; Boolean and proximity operators; subject headings; text word search; spelling, syntax and line numbers; and limits and filters. The seventh (skilled translation of the search strategy to additional databases) was removed, as there was consensus that this should be left to the discretion of searchers. An updated PRESS 2015 Guideline Statement was developed, which includes the following four documents: PRESS 2015 Evidence-Based Checklist, PRESS 2015 Recommendations for Librarian Practice, PRESS 2015 Implementation Strategies, and PRESS 2015 Guideline Assessment Form. The PRESS 2015 Guideline Statement should help to guide and improve the peer review of electronic literature search strategies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Xiao, Ying; Kry, Stephen F; Popple, Richard; Yorke, Ellen; Papanikolaou, Niko; Stathakis, Sotirios; Xia, Ping; Huq, Saiful; Bayouth, John; Galvin, James; Yin, Fang-Fang
2015-05-08
This report describes the current state of flattening filter-free (FFF) radiotherapy beams implemented on conventional linear accelerators, and is aimed primarily at practicing medical physicists. The Therapy Emerging Technology Assessment Work Group of the American Association of Physicists in Medicine (AAPM) formed a writing group to assess FFF technology. The published literature on FFF technology was reviewed, along with technical specifications provided by vendors. Based on this information, supplemented by the clinical experience of the group members, consensus guidelines and recommendations for implementation of FFF technology were developed. Areas in need of further investigation were identified. Removing the flattening filter increases beam intensity, especially near the central axis. Increased intensity reduces treatment time, especially for high-dose stereotactic radiotherapy/radiosurgery (SRT/SRS). Furthermore, removing the flattening filter reduces out-of-field dose and improves beam modeling accuracy. FFF beams are advantageous for small field (e.g., SRS) treatments and are appropriate for intensity-modulated radiotherapy (IMRT). For conventional 3D radiotherapy of large targets, FFF beams may be disadvantageous compared to flattened beams because of the heterogeneity of FFF beam across the target (unless modulation is employed). For any application, the nonflat beam characteristics and substantially higher dose rates require consideration during the commissioning and quality assurance processes relative to flattened beams, and the appropriate clinical use of the technology needs to be identified. Consideration also needs to be given to these unique characteristics when undertaking facility planning. Several areas still warrant further research and development. Recommendations pertinent to FFF technology, including acceptance testing, commissioning, quality assurance, radiation safety, and facility planning, are presented. Examples of clinical applications are provided. Several of the areas in which future research and development are needed are also indicated.
NASA Astrophysics Data System (ADS)
Gonzalez, Pablo J.
2017-04-01
Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the pre-described or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter requires essentially two parameters: the size of the filtering window and a parameter controlling the smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter, based on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered-phase quality usually requires careful selection of those parameters. There is therefore a strong need to develop automatic filtering methods suited to automatic processing while maximizing filtered-phase quality. In this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter builds upon the modified Goldstein filter [Baran et al., 2003]. It improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (~100-200 m) and variable temporal baselines of 70 and 190 days, over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyiragongo-Nyamulagira). The differential phase of these examples shows intense localized volcano deformation and also vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and effectively suppressing phase noise in smoothly varying phase regions. Finally, this method has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M. P., Kampes, B. M., Perski, Z., Lilly, P. (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R. M., Werner, C. L. (1998) Radar interferogram filtering for geophysical applications. Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
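For reference, the baseline Goldstein operation that the recursive method iterates can be sketched on a single patch: transform the complex interferogram, weight by a smoothed, normalized response spectrum raised to alpha, and transform back. The coherence-driven alpha of the modified filter and the recursive kernel cascade are omitted; all sizes and the synthetic fringes are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(ifg_patch, alpha=0.8, smooth=3):
    """Goldstein-style spectral filter on one complex interferogram patch."""
    Z = np.fft.fft2(ifg_patch)
    S = uniform_filter(np.abs(Z), smooth)       # smoothed response spectrum
    H = (S / S.max()) ** alpha                  # power-spectrum weighting
    return np.fft.ifft2(Z * H)

# synthetic fringe patch + phase noise (illustrative)
y, x = np.mgrid[0:64, 0:64]
rng = np.random.default_rng(6)
ifg = np.exp(1j * (0.2 * x + rng.normal(0, 0.8, (64, 64))))
filt = goldstein_patch(ifg)
print(np.angle(filt)[32, :5])                   # smoother fringe phase
```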
Applications of high-dimensional photonic entanglement
NASA Astrophysics Data System (ADS)
Broadbent, Curtis J.
This thesis presents the results of four experiments related to applications of higher dimensional photonic entanglement. (1) We use energy-time entangled biphotons from spontaneous parametric down-conversion (SPDC) to implement a large-alphabet quantum key distribution (QKD) system which securely transmits up to 10 bits of the random key per photon. An advantage over binary alphabet QKD is demonstrated for quantum channels with a single-photon transmission-rate ceiling. The security of the QKD system is based on the measurable reduction of entanglement in the presence of eavesdropping. (2) We demonstrate the preservation of energy-time entanglement in a tunable slow-light medium. The fine-structure resonances of a hot Rubidium vapor are used to slow one photon from an energy-time entangled biphoton generated with non-degenerate SPDC. The slow-light medium is placed in one arm of a Franson interferometer. The observed Franson fringes witness the presence of entanglement and quantify a delay of 1.3 biphoton correlation lengths. (3) We utilize holograms to discriminate between two spatially-coherent single-photon images. Heralded single photons are created with degenerate SPDC and sent through one of two transmission masks to make single-photon images with no spatial overlap. The single-photon images are sent through a previously prepared holographic filter. The filter discriminates the single-photon images with an average confidence level of 95%. (4) We employ polarization entangled biphotons generated from non-collinear SPDC to violate a generalized Leggett-Garg inequality with non-local weak measurements. The weak measurement is implemented with Fresnel reflection of a microscope coverslip on one member of the entangled biphoton. Projective measurement with computer-controlled polarizers on the entangled state after the weak measurement yields a joint probability with three degrees of freedom. Contextual values are then used to determine statistical averages of measurement operations from the joint probability. Correlations between the measured averages are shown to violate the upper bound of three distinct two-object Leggett-Garg inequalities derived from assumptions of macro-realism. A relationship between the violation of two-object Leggett-Garg inequalities and strange non-local weak values is derived and experimentally demonstrated.
Optical design of the lightning imager for MTG
NASA Astrophysics Data System (ADS)
Lorenzini, S.; Bardazzi, R.; Di Giampietro, M.; Feresin, F.; Taccola, M.; Cuevas, L. P.
2017-11-01
The Lightning Imager for Meteosat Third Generation is an optical payload with on-board data processing for the detection of lightning. The instrument will provide global monitoring of lightning events over the full Earth disk from geostationary orbit and will operate in day and night conditions. The requirements of a large field of view together with high detection efficiency for small and weak optical pulses, superimposed on a much brighter and highly spatially and temporally variable background (full operation during day and night conditions, seasonal variations, and different albedos between clouds, oceans and lands), drive the design of the optical instrument. The main challenge is to distinguish true lightning events from false events generated by random noise (e.g. background shot noise), sun glint diffusion, or signal variations originated by micro-vibrations. This can be achieved thanks to a 'multi-dimensional' filtering, working simultaneously in the spectral, spatial and temporal domains. The spectral filtering is achieved with a very narrowband filter centred on the bright lightning O2 triplet line (777.4 nm +/- 0.17 nm). The spatial filtering is achieved with a ground sampling distance significantly smaller (between 4 and 5 km at sub-satellite pointing) than the dimensions of a typical lightning pulse. The temporal filtering is achieved by sampling the Earth disk continuously within a period close to 1 ms. This paper presents the status of the optical design, addressing the trade-off between different configurations and detailing the design and the analyses of the current baseline. Emphasis is given to the discussion of the design drivers and the solutions implemented, in particular concerning the spectral filtering and the optimisation of the signal-to-noise ratio.
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
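The nudging-plus-resampling combination can be illustrated on a toy scalar model: particles are relaxed toward each new observation before importance weighting and SIR resampling. The model, noise levels and nudging strength below are invented, and the proposal-density weight correction used in a proper efficient particle filter is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
T, Np = 100, 200
R, Q = 0.5, 0.2                      # observation / model error variances
tau = 0.3                            # nudging strength (0 = plain SIR)
x_true = 0.0
particles = rng.normal(0, 1, Np)

for t in range(T):
    x_true = 0.9 * x_true + rng.normal(0, np.sqrt(Q))   # "truth" trajectory
    y = x_true + rng.normal(0, np.sqrt(R))              # observation
    particles = 0.9 * particles + rng.normal(0, np.sqrt(Q), Np)  # forecast
    particles += tau * (y - particles)                  # nudge toward the obs
    # importance weighting and SIR resampling (proposal correction omitted)
    w = np.exp(-0.5 * (y - particles) ** 2 / R)
    w /= w.sum()
    particles = particles[rng.choice(Np, Np, p=w)]

print(x_true, particles.mean())
```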
Colour, albedo and nucleus size of Halley's comet
NASA Technical Reports Server (NTRS)
Cruikshank, D. P.; Tholen, D. J.; Hartmann, W. K.
1985-01-01
Photometry of Halley's comet in the B, J, V, and K broadband filters during a time when the coma was very weak and presumed to contribute negligibly to the broadband photometry is reported. The V-J and J-K colors suggest that the color of the nucleus of Halley's comet is similar to that of the D-type asteroids, which in turn suggests that the surface of the nucleus has an albedo less than 0.1.
Cheng, Rui; Xia, Li; Sima, Chaotan; Ran, Yanli; Rohollahnejad, Jalal; Zhou, Jiaao; Wen, Yongqiang; Yu, Can
2016-02-08
Ultrashort fiber Bragg gratings (US-FBGs) have significant potential as weak grating sensors for distributed sensing, but their exploitation has been limited by their inherently broad spectra, which are undesirable for most traditional wavelength measurements. To address this, we recently introduced a new interrogation concept using shifted optical Gaussian filters (SOGF) that is well suited to US-FBG measurements. Here, we apply it to demonstrate, for the first time, a US-FBG-based self-referencing distributed optical sensing technique, with the advantages of adjustable sensitivity and range, high-speed and wide-range (potentially >14000 με) intensity-based detection, and resistance to disturbance by nonuniform parameter distribution. The entire system is essentially based on a microwave network, which incorporates the SOGF with a fiber delay line between the two arms. Differential detections of the cascaded US-FBGs are performed individually in the network time-domain response, which can be obtained by analyzing its complex frequency response. Experimental results are presented and discussed using eight cascaded US-FBGs. A comprehensive numerical analysis is also conducted to assess the system performance, which shows that the use of US-FBGs instead of conventional weak FBGs could significantly improve the power budget and capacity of the distributed sensing system while maintaining the crosstalk level and intensity decay rate, providing a promising route for future sensing applications.
The extent and consequences of p-hacking in science.
Head, Megan L; Holman, Luke; Lanfear, Rob; Kahn, Andrew T; Jennions, Michael D
2015-03-01
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as "p-hacking," occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
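One common way to test for p-hacking in a meta-analytic setting, in the spirit described above, is to ask whether p-values just below 0.05 are overrepresented relative to a slightly lower bin, since a genuinely right-skewed p-curve should put fewer values in the upper bin. A minimal sketch with invented p-values:

```python
import numpy as np
from scipy.stats import binomtest

# reported p-values pooled from many studies (illustrative, not real data)
pvals = np.array([0.003, 0.012, 0.021, 0.032, 0.041, 0.044, 0.046,
                  0.047, 0.048, 0.049, 0.038, 0.045, 0.008, 0.043])

# bin test: under a true effect (no p-hacking), the (0.045, 0.05) bin should
# hold FEWER values than the (0.04, 0.045] bin
upper = ((pvals > 0.045) & (pvals < 0.05)).sum()
lower = ((pvals > 0.04) & (pvals <= 0.045)).sum()
res = binomtest(upper, upper + lower, p=0.5, alternative="greater")
print(upper, lower, res.pvalue)      # small p-value hints at left skew / p-hacking
```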
The rise of agrarian capitalism and the decline of family farming in England.
Shaw-Taylor, Leigh
2012-01-01
Historians have documented rising farm sizes throughout the period 1450–1850. Existing studies have revealed much about the mechanisms underlying the development of agrarian capitalism. However, we currently lack any consensus as to when the critical developments occurred. This is largely due to the absence of sufficiently large and geographically wide-ranging datasets but is also attributable to conceptual weaknesses in much of the literature. This article develops a new approach to the problem and argues that agrarian capitalism was dominant in southern and eastern England by 1700 but that in northern England the critical developments came later.
N-Glycopeptide Profiling in Arabidopsis Inflorescence
Xu, Shou-Ling; Medzihradszky, Katalin F.; Wang, Zhi-Yong; ...
2016-04-11
This study presents the first large-scale analysis of plant intact glycopeptides. Using wheat germ agglutinin lectin weak affinity chromatography to enrich modified peptides, followed by ETD fragmentation tandem mass spectrometry, glycan compositions on over 1100 glycopeptides from 270 proteins found in Arabidopsis inflorescence tissue were characterized. While some sites were only detected with a single glycan attached, others displayed up to 16 different glycoforms. Among the identified glycopeptides were four modified in non-consensus glycosylation motifs. Finally, while most of the modified proteins are secreted, membrane, ER or Golgi localized proteins, surprisingly N-linked sugars were detected on a protein predicted to be cytosolic or nuclear.
Valuing Climate Change Impacts on Human Health: Empirical Evidence from the Literature
Markandya, Anil; Chiabai, Aline
2009-01-01
There is a broad consensus that climate change will increase the costs arising from diseases such as malaria and diarrhea and, furthermore, that the largest increases will be in developing countries. One of the problems is the lack of studies measuring these costs systematically and in detail. This paper critically reviews a number of studies about the costs of planned adaptation in the health context, and compares current health expenditures with MDGs which are felt to be inadequate when considering climate change impacts. The analysis serves also as a critical investigation of the methodologies used and aims at identifying research weaknesses and gaps. PMID:19440414
Dark Energy Survey Year 1 results: curved-sky weak lensing mass map
NASA Astrophysics Data System (ADS)
Chang, C.; Pujol, A.; Mawdsley, B.; Bacon, D.; Elvin-Poole, J.; Melchior, P.; Kovács, A.; Jain, B.; Leistedt, B.; Giannantonio, T.; Alarcon, A.; Baxter, E.; Bechtol, K.; Becker, M. R.; Benoit-Lévy, A.; Bernstein, G. M.; Bonnett, C.; Busha, M. T.; Rosell, A. Carnero; Castander, F. J.; Cawthon, R.; da Costa, L. N.; Davis, C.; De Vicente, J.; DeRose, J.; Drlica-Wagner, A.; Fosalba, P.; Gatti, M.; Gaztanaga, E.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Hoyle, B.; Huff, E. M.; Jarvis, M.; Jeffrey, N.; Kacprzak, T.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Prat, J.; Rau, M. M.; Rollins, R. P.; Roodman, A.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sánchez, C.; Sevilla-Noarbe, I.; Sheldon, E.; Troxel, M. A.; Varga, T. N.; Vielzeuf, P.; Vikram, V.; Wechsler, R. H.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Estrada, J.; Neto, A. Fausti; Fernandez, E.; Flaugher, B.; Frieman, J.; García-Bellido, J.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kent, S.; Kirk, D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Martini, P.; Menanteau, F.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Petravick, D.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Tucker, D. L.; Walker, A. R.; Wester, W.; Zhang, Y.
2018-04-01
We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than the previous work, is constructed over a contiguous ≈1500 deg2, covering a comoving volume of ≈10 Gpc3. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogues, METACALIBRATION and IM3SHAPE, with sources at redshift 0.2 < z < 1.3, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal to noise in the E-mode map and the B-mode map is ˜1.5 (˜2) when smoothed with a Gaussian filter of σG = 30 (80) arcmin. The second and third moments of the convergence κ in the maps are in agreement with simulations. We also find no significant correlation of κ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation
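The Gaussian smoothing step quoted above is straightforward to reproduce on a flat-sky cutout; the sketch below smooths a mock convergence map with a σG = 30 arcmin kernel. The real DES Y1 map is curved-sky (HEALPix-based), which this deliberately ignores, and the map resolution is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

pix_arcmin = 5.0                       # map resolution (illustrative)
sigma_G = 30.0                         # smoothing scale in arcmin, as in the text
kappa = np.random.default_rng(8).normal(0, 0.02, (512, 512))  # mock kappa map

kappa_s = gaussian_filter(kappa, sigma=sigma_G / pix_arcmin, mode="wrap")
# smoothing suppresses pixel noise, raising the effective signal-to-noise
print(kappa.std(), kappa_s.std())
```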
Bernardes, Juliana; Zaverucha, Gerson; Vaquero, Catherine; Carbone, Alessandra
2016-01-01
Traditional protein annotation methods describe known domains with probabilistic models representing consensus among homologous domain sequences. However, when relevant signals become too weak to be identified by a global consensus, attempts for annotation fail. Here we address the fundamental question of domain identification for highly divergent proteins. By using high performance computing, we demonstrate that the limits of state-of-the-art annotation methods can be bypassed. We design a new strategy based on the observation that many structural and functional protein constraints are not globally conserved through all species but might be locally conserved in separate clades. We propose a novel exploitation of the large amount of data available: 1. for each known protein domain, several probabilistic clade-centered models are constructed from a large and differentiated panel of homologous sequences, 2. a decision-making protocol combines outcomes obtained from multiple models, 3. a multi-criteria optimization algorithm finds the most likely protein architecture. The method is evaluated for domain and architecture prediction over several datasets and statistical testing hypotheses. Its performance is compared against HMMScan and HHblits, two widely used search methods based on sequence-profile and profile-profile comparison. Due to their closeness to actual protein sequences, clade-centered models are shown to be more specific and functionally predictive than the broadly used consensus models. Based on them, we improved annotation of Plasmodium falciparum protein sequences on a scale not previously possible. We successfully predict at least one domain for 72% of P. falciparum proteins against 63% achieved previously, corresponding to 30% of improvement over the total number of Pfam domain predictions on the whole genome. The method is applicable to any genome and opens new avenues to tackle evolutionary questions such as the reconstruction of ancient domain duplications, the reconstruction of the history of protein architectures, and the estimation of protein domain age. Website and software: http://www.lcqb.upmc.fr/CLADE. PMID:27472895
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-03-01
The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks, that take time-series inputs, for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched-filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enables the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real-time.
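The detection network itself is a plain 1D convolutional classifier over strain time series. The sketch below is a deliberately small PyTorch stand-in for a Deep Filtering-style detector, with invented layer sizes, shown only to make the input/output shape conventions concrete; it is not the authors' architecture.

```python
import torch
import torch.nn as nn

class DetectorCNN(nn.Module):
    """Tiny 1D CNN taking a whitened strain segment, outputting signal/noise scores."""
    def __init__(self, length=2048):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        )
        with torch.no_grad():                      # infer the flattened size
            n = self.features(torch.zeros(1, 1, length)).numel()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(n, 2))

    def forward(self, x):                          # x: (batch, 1, length)
        return self.head(self.features(x))

model = DetectorCNN()
logits = model(torch.randn(4, 1, 2048))            # 4 mock input segments
print(logits.shape)                                # (4, 2) class scores
```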
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troia, Matthew J.; Gido, Keith B.
Trade-offs among functional traits produce multi-trait strategies that shape species interactions with the environment and drive the assembly of local communities from regional species pools. Stream fish communities vary along stream size gradients and among hierarchically structured habitat patches, but little is known about how the dispersion of strategies varies along environmental gradients and across spatial scales. We used null models to quantify the dispersion of reproductive life history, feeding, and locomotion strategies in communities sampled at three spatial scales in a prairie stream network in Kansas, USA. Strategies were generally underdispersed at all spatial scales, corroborating the longstanding notion of abiotic filtering in stream fish communities. We tested for variation in strategy dispersion along a gradient of stream size and between headwater streams draining different ecoregions. Reproductive life history strategies became increasingly underdispersed moving from downstream to upstream, suggesting that abiotic filtering is stronger in headwaters. This pattern was stronger among reaches compared to mesohabitats, supporting the premise that differences in hydrologic regime among reaches filter reproductive life history strategies. Feeding strategies became increasingly underdispersed moving from upstream to downstream, indicating that environmental filters associated with stream size affect the dispersion of feeding and reproductive life history in opposing ways. Weak differences in strategy dispersion were detected between ecoregions, suggesting that different abiotic filters or strategies drive community differences between ecoregions. Lastly, given the pervasiveness of multi-trait strategies in plant and animal communities, we conclude that the assessment of strategy dispersion offers a comprehensive approach for elucidating mechanisms of community assembly.
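A generic sketch of the null-model logic implied above (not the authors' code): observed within-community dispersion, measured as mean pairwise distance in strategy space, is compared with a null distribution built by drawing random species sets from the regional pool; underdispersion yields a negative standardized effect size.

```python
# Null-model test for trait/strategy dispersion (illustrative data).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def mean_pairwise_distance(traits):
    return pdist(traits).mean()

def dispersion_ses(community_traits, pool_traits, n_null=999):
    obs = mean_pairwise_distance(community_traits)
    k = len(community_traits)
    null = np.array([
        mean_pairwise_distance(
            pool_traits[rng.choice(len(pool_traits), k, replace=False)])
        for _ in range(n_null)])
    return (obs - null.mean()) / null.std()

pool = rng.normal(size=(60, 3))      # 60 species, 3 strategy axes
community = pool[:8] * 0.3           # artificially clustered community
print("SES =", dispersion_ses(community, pool))  # < 0 -> underdispersed
```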
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-07-01
X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but to be correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks where the later filtering steps provide a controlled level of texture removal. We provide a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.
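A minimal sketch of the iterative weak-denoising idea using scikit-image; the number of passes and the down-weighted filtering strength h are illustrative choices, not the tuned values of the framework described above.

```python
# Iterative nonlocal means: several weak passes instead of one strong one.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

img = img_as_float(data.camera())
noisy = img + 0.08 * np.random.default_rng(0).normal(size=img.shape)

out = noisy
for i in range(4):                       # several weak denoising subtasks
    sigma = estimate_sigma(out)          # re-estimate residual noise level
    out = denoise_nl_means(out, patch_size=5, patch_distance=6,
                           h=0.6 * sigma, sigma=sigma, fast_mode=True)
print("residual noise estimate:", estimate_sigma(out))
```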
Metrology with Weak Value Amplification and Related Topics
2013-10-12
Excerpted fragments indicate that sensitivity depends crucially on the relative time scales involved. [Figure residue: FIG. 1 of the report shows a simple set-up comprising a pulsed laser, PBS, PC, HWP, SBC, piezo, 50:50 splitter, and split detector.] ...reasons why this may be impossible or inadvisable given a laboratory set-up. There may be a minimum quiet time between laser pulses, for example... While the window for measurements is a full 100 ms, our filtering limits the laser noise to time scales of about 30 ms; for analysis, we take this as our integration time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornejo, Juan Carlos
The Standard Model has been a theory with the greatest success in describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, the limitations of the Standard Model have long been suspected, given its lack of a description of gravity or dark matter. Its largest challenge to date has been the observation of neutrino oscillations, and the implication that neutrinos may not be massless, as required by the Standard Model. The growing consensus is that the Standard Model is simply a lower energy effective field theory, and that new physics lies at much higher energies. The Qweak experiment is testing the Electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Q^p_w). Any signs of "new physics" will appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry of the electron-proton interaction via elastic scattering of a longitudinally polarized electron beam off an unpolarized proton target. The experiment required that the electron beam polarization be measured to an absolute uncertainty of 1%. At this level the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation details the use of Compton scattering to determine the electron beam polarization via detection of the scattered photon. I conclude the dissertation with an independent analysis of the blinded Qweak data.
Weak D caused by a founder deletion in the RHD gene.
Fichou, Yann; Chen, Jian-Min; Le Maréchal, Cédric; Jamet, Déborah; Dupont, Isabelle; Chuteau, Claude; Durousseau, Cécile; Loirat, Marie-Jeanne; Bailly, Pascal; Férec, Claude
2012-11-01
The RhD blood group system exemplifies a genotype-phenotype correlation by virtue of its highly polymorphic and immunogenic nature. Weak D phenotypes are generally thought to result from missense mutations leading to quantitative change of the D antigen in the red blood cell membrane or intracellularly. Different sets of polymerase chain reaction primers were designed to map and clone a deletion involving RHD Exon 10, which was found in approximately 3% of approximately 2000 RHD hemizygous subjects with D phenotype ambiguity. D antigen density was measured by flow cytometry. Transcript analysis was carried out by 3'-rapid amplification of complementary DNA ends. Haplotype analysis was performed by microsatellite genotyping. A 5405-bp deletion that removed nearly two-thirds of Intron 9 and almost all of Exon 10 of the RHD gene was characterized. It is predicted to result in the replacement of the last eight amino acids of the wild-type RhD protein by another four amino acids. The mean RhD antigen density from two deletion carriers was determined to be only 30. A consensus haplotype could be deduced from the deletion carriers based on the microsatellite genotyping data. The currently reported deletion was derived from a common founder. This deletion appears to represent not only the first large deletion associated with weak D but also the weakest of weak D alleles so far reported. This highly unusual genotype-phenotype relationship may be attributable to the additive effect of three distinct mechanisms that affect mRNA formation, mRNA stability, and RhD/ankyrin-R interaction, respectively. © 2012 American Association of Blood Banks.
Surgical Management of Degenerative Meniscus Lesions: The 2016 ESSKA Meniscus Consensus
Beaufils, P.; Becker, R.; Kopf, S.; Englund, M.; Verdonk, R.; Ollivier, M.; Seil, R.
2017-01-01
Purpose A degenerative meniscus lesion is a slowly developing process typically involving a horizontal cleavage in a middle-aged or older person. When the knee is symptomatic, arthroscopic partial meniscectomy has been practised for a long time with many case series reporting improved patient outcomes. Since 2002, several randomised clinical trials demonstrated no additional benefit of arthroscopic partial meniscectomy compared to non-operative treatment, sham surgery or sham arthroscopic partial meniscectomy. These results introduced controversy in the medical community and made clinical decision-making challenging in the daily clinical practice. To facilitate the clinical decision-making process, a consensus was developed. This initiative was endorsed by ESSKA. Methods A degenerative meniscus lesion was defined as a lesion occurring without any history of significant acute trauma in a patient older than 35 years. Congenital lesions, traumatic meniscus tears and degenerative lesions occurring in young patients, especially in athletes, were excluded. The project followed the so-called formal consensus process, involving a steering group, a rating group and a peer-review group. A total of 84 surgeons and scientists from 22 European countries were included in the process. Twenty questions, their associated answers and an algorithm based on extensive literature review and clinical expertise, were proposed. Each question and answer set was graded according to the scientific level of the corresponding literature. Results The main finding was that arthroscopic partial meniscectomy should not be proposed as a first line of treatment for degenerative meniscus lesions. Arthroscopic partial meniscectomy should only be considered after a proper standardised clinical and radiological evaluation and when the response to non-operative management has not been satisfactory. Magnetic resonance imaging of the knee is typically not indicated in the first-line work-up, but knee radiography should be used as an imaging tool to support a diagnosis of osteoarthritis or to detect certain rare pathologies, such as tumours or fractures of the knee. Discussion The present work offers a clear framework for the management of degenerative meniscus lesions, with the aim to balance information extracted from the scientific evidence and clinical expertise. Because of biases and weaknesses of the current literature and lack of definition of important criteria such as mechanical symptoms, it cannot be considered as an exact treatment algorithm. It summarises the results of the “ESSKA Meniscus Consensus Project” ( http://www.esska.org/education/projects ) and is the first official European consensus on this topic. The consensus may be updated and refined as more high-quality evidence emerges. Level of Evidence I. PMID:29114633
Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System
NASA Technical Reports Server (NTRS)
Lin, Tsung Han (Hank)
2011-01-01
JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
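A compact GentleBoost sketch in the standard formulation (weighted least-squares weak learners with an additive update), with shallow regression trees standing in for the boosted decision trees mentioned above; the data and round counts are illustrative.

```python
# GentleBoost with regression-tree weak learners.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentleboost(X, y, n_rounds=50, depth=2):
    """y must be in {-1, +1}; returns the list of fitted weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    F = np.zeros(n)
    learners = []
    for _ in range(n_rounds):
        tree = DecisionTreeRegressor(max_depth=depth)
        tree.fit(X, y, sample_weight=w)      # weighted LS fit of f_m(x)
        f = tree.predict(X)
        F += f
        w *= np.exp(-y * f)                  # emphasize badly fit examples
        w /= w.sum()
        learners.append(tree)
    return learners

def predict(learners, X):
    return np.sign(sum(t.predict(X) for t in learners))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] * X[:, 1])               # simple nonlinear target
model = gentleboost(X, y)
print("train accuracy:", (predict(model, X) == y).mean())
```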
Design of a lock-in amplifier circuit
NASA Astrophysics Data System (ADS)
Liu, H.; Huang, W. J.; Song, X.; Zhang, W. Y.; Sa, L. B.
2017-01-01
The lock-in amplifier recovers, by means of the phase-sensitive detection technique, weak signals submerged in background noise. This design is based on TI ultra-low-power chips, with the LM358, INA129, OPA227, OP07 and other chips at the core of the designed and fabricated lock-in amplifier. A signal generator feeds a 10 MΩ/1 kΩ resistive voltage-divider network to produce an adjustable 10 µV-1 mV sine-wave test signal s(t). This signal, together with the accompanying interference, passes through an AC amplifier and a band-pass filter to yield x(t); on the other hand, a square-wave reference is phase-shifted to obtain the reference signal r(t). The two signals are combined in the phase-sensitive detector to produce a full-wave rectified DC output, which then passes through a low-pass filter and a DC amplifier so that the measured signal can be detected more accurately; finally, the circuit performs A/D conversion and a single-chip microcontroller displays the output.
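A digital analogue of the lock-in principle described above, as a hedged sketch rather than the reported analog hardware: the noisy input is mixed with in-phase and quadrature references and low-pass filtered to recover the amplitude and phase of a microvolt-level signal.

```python
# Digital lock-in: phase-sensitive detection followed by low-pass filtering.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, f0, dur = 100_000, 1_000, 1.0            # sample rate, ref freq, seconds
t = np.arange(int(fs * dur)) / fs
signal = 10e-6 * np.sin(2 * np.pi * f0 * t + 0.3)   # 10 uV buried signal
noisy = signal + 1e-3 * np.random.default_rng(0).normal(size=t.size)

i_mix = noisy * np.sin(2 * np.pi * f0 * t)   # phase-sensitive detection
q_mix = noisy * np.cos(2 * np.pi * f0 * t)
sos = butter(4, 10, btype='low', fs=fs, output='sos')   # 10 Hz low-pass
I, Q = sosfiltfilt(sos, i_mix), sosfiltfilt(sos, q_mix)

amp = 2 * np.hypot(I[-1], Q[-1])             # recovered amplitude (~10 uV)
phase = np.arctan2(Q[-1], I[-1])             # recovered phase (~0.3 rad)
print(f"amplitude = {amp*1e6:.2f} uV, phase = {phase:.2f} rad")
```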
Orbital angular momentum mode division filtering for photon-phonon coupling
Zhu, Zhi-Han; Sheng, Li-Wen; Lv, Zhi-Wei; He, Wei-Ming; Gao, Wei
2017-01-01
Stimulated Brillouin scattering (SBS), a fundamental nonlinear interaction between light and acoustic waves occurring in any transparent material, has been broadly studied for several decades and gained rapid progress in integrated photonics recently. However, the SBS noise arising from the unwanted coupling between photons and spontaneous non-coherent phonons in media is inevitable. Here, we propose and experimentally demonstrate that this obstacle can be overcome via a method called orbital angular momentum mode division filtering. Owing to the introduction of a new distinguishable degree-of-freedom, even extremely weak signals can be discriminated and separated from the strong noise produced in SBS processes. The mechanism demonstrated in this proof-of-principle work provides a practical way for quasi-noise-free photonic-phononic operation; it remains valid in waveguides supporting multiple orthogonal spatial modes and permits more flexibility and robustness for future SBS devices. PMID:28071736
k-filtering applied to Cluster density measurements in the Solar Wind: Early findings
NASA Astrophysics Data System (ADS)
Jeska, Lauren; Roberts, Owen; Li, Xing
2014-05-01
Studies of solar wind turbulence indicate that a large proportion of the energy is Alfvénic (incompressible) at inertial scales. The properties of the turbulence found in the dissipation range are still under debate; while it is widely believed that kinetic Alfvén waves form the dominant component, the constituents of the remaining compressible turbulence are disputed. Using k-filtering, the power can be measured without assuming the validity of Taylor's hypothesis, and its distribution in (ω, k)-space can be determined to assist the identification of weak turbulence components. This technique is applied to Cluster electron density measurements and compared to the power in |B(t)|. As the direct electron density measurements from the WHISPER instrument have a low cadence of only 2.2 s, proxy data derived from the spacecraft potential, measured every 0.2 s by the EFW instrument, are used to extend this study to ion scales.
Physiotherapy for functional motor disorders: a consensus recommendation
Nielsen, Glenn; Stone, Jon; Matthews, Audrey; Brown, Melanie; Sparkes, Chris; Farmer, Ross; Masterton, Lindsay; Duncan, Linsey; Winters, Alisa; Daniell, Laura; Lumsden, Carrie; Carson, Alan; David, Anthony S; Edwards, Mark
2015-01-01
Background Patients with functional motor disorder (FMD) including weakness and paralysis are commonly referred to physiotherapists. There is growing evidence that physiotherapy is an effective treatment, but the existing literature has limited explanations of what physiotherapy should consist of and there are insufficient data to produce evidence-based guidelines. We aim to address this issue by presenting recommendations for physiotherapy treatment. Methods A meeting was held between physiotherapists, neurologists and neuropsychiatrists, all with extensive experience in treating FMD. A set of consensus recommendations were produced based on existing evidence and experience. Results We recommend that physiotherapy treatment is based on a biopsychosocial aetiological framework. Treatment should address illness beliefs, self-directed attention and abnormal habitual movement patterns through a process of education, movement retraining and self-management strategies within a positive and non-judgemental context. We provide specific examples of these strategies for different symptoms. Conclusions Physiotherapy has a key role in the multidisciplinary management of patients with FMD. There appear to be specific physiotherapy techniques which are useful in FMD and which are amenable to and require prospective evaluation. The processes involved in referral, treatment and discharge from physiotherapy should be considered carefully as a part of a treatment package. PMID:25433033
Arolt, V; Rothermundt, M; Peters, M; Leonard, B
2002-01-01
There is convincing evidence that cytokines are involved in the physiology and pathophysiology of brain function and interact with different neurotransmitter and neuroendocrine pathways. The possible involvement of the immune system in the neurobiological mechanisms that underlie psychiatric disorders has attracted increasing attention in recent years. Thus in the last decade, numerous clinical studies have demonstrated dysregulated immune functions in patients with psychiatric disorders. Such findings formed the basis of the 7th Expert Meeting on Psychiatry and Immunology in Muenster, Germany, where a consensus symposium was held to consider the strengths and weaknesses of current research in psychoneuroimmunology. Following a general overview of the field, the following topics were discussed: (1) methodological problems in laboratory procedures and recruitment of clinical samples; (2) the importance of pre-clinical research and animal models in psychiatric research; (3) the problem of statistical vs biological relevance. It was concluded that, despite a fruitful proliferation of research activities throughout the last decade, the continuous elaboration of methodological standards including the implementation of hypothesis-driven research represents a task that is likely to prove crucial for the future development of immunology research in clinical psychiatry.
Executive summary—Biomarkers of Nutrition for Development: Building a Consensus
Namasté, Sorrel; Brabin, Bernard; Combs, Gerald; L'Abbe, Mary R; Wasantwisut, Emorn; Darnton-Hill, Ian
2011-01-01
The ability to develop evidence-based clinical guidance and effective programs and policies to achieve global health promotion and disease prevention goals depends on the availability of valid and reliable data. With specific regard to the role of food and nutrition in achieving those goals, relevant data are developed with the use of biomarkers that reflect nutrient exposure, status, and functional effect. A need exists to promote the discovery, development, and use of biomarkers across a range of applications. In addition, a process is needed to harmonize the global health community's decision making about what biomarkers are best suited for a given use under specific conditions and settings. To address these needs, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, US Department of Health and Human Services, organized a conference entitled “Biomarkers of Nutrition for Development: Building a Consensus,” which was hosted by the International Atomic Energy Agency. Partners included key multilateral, US agencies and public and private organizations. The assembly endorsed the utility of this initiative and the need for the BOND (Biomarkers of Nutrition for Development) project to continue. A consensus was reached on the requirement to develop a process to inform the community about the relative strengths or weaknesses and specific applications of various biomarkers under defined conditions. The articles in this supplement summarize the deliberations of the 4 working groups: research, clinical, policy, and programmatic. Also described are content presentations on the harmonization processes, the evidence base for biomarkers for 5 case-study micronutrients, and new frontiers in science and technology. PMID:21733880
Rosas Hernández, Ana María; Alejandre Carmona, Sergio; Rodríguez Sánchez, Javier Enrique; Castell Alcalá, Maria Victoria; Otero Puime, Ángel
2018-03-16
To identify the population over 70 years old treated in primary care who should participate in a physical exercise program to prevent frailty, and to analyze the concordance between two criteria for selecting the beneficiary population of the program. Population-based cross-sectional study. Primary Care. Elderly people over 70 years old, living in the Peñagrande neighborhood (Fuencarral district of Madrid) from the Peñagrande cohort, who agreed to participate in 2015 (n = 332). The main variable of the study is the need for exercise prescription in people over 70 years old in the Primary Care setting. It was identified through two different definitions: prefrail (1-2 of 5 Fried criteria) and independent individuals with limited physical performance, as defined by the Consensus on frailty and falls prevention among the elderly (independent and with a total SPPB score <10). 63.8% of participants (n = 196) needed exercise prescription based on the criteria defined by Fried and/or the consensus for prevention of frailty and falls in the elderly. In 82 cases both criteria were met, 80 participants were prefrail with normal physical performance, and 34 were robust with limited physical performance. The concordance between the two criteria is weak (kappa index 0.27). Almost two-thirds of the elderly have some kind of functional limitation. The criteria of the consensus document to prevent frailty detect half of the prefrail individuals in the community. Copyright © 2018 The Authors. Published by Elsevier España, S.L.U. All rights reserved.
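A sketch of the reported concordance computation, Cohen's kappa between the two selection criteria coded as per-subject boolean labels; the vectors below are randomly generated placeholders, not the study data.

```python
# Cohen's kappa between two frailty-screening criteria (illustrative data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
fried_prefrail = rng.random(332) < 0.4        # criterion 1: Fried 1-2 of 5
sppb_limited = rng.random(332) < 0.35         # criterion 2: SPPB < 10

kappa = cohen_kappa_score(fried_prefrail, sppb_limited)
print(f"kappa = {kappa:.2f}")   # ~0 here; the study reported 0.27 (weak)
```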
International consensus statement on allergy and rhinology: allergic rhinitis-executive summary.
Wise, Sarah K; Lin, Sandra Y; Toskala, Elina
2018-02-01
The available allergic rhinitis (AR) literature continues to grow. Critical evaluation and understanding of this literature is important to appropriately utilize this knowledge in the care of AR patients. The International Consensus statement on Allergy and Rhinology: Allergic Rhinitis (ICAR:AR) has been produced as a multidisciplinary international effort. This Executive Summary highlights and summarizes the findings of the comprehensive ICAR:AR document. The ICAR:AR document was produced using previously described methodology. Specific topics were developed relating to AR. Each topic was assigned a literature review, evidence-based review (EBR), or evidence-based review with recommendations (EBRR) format as dictated by available evidence and purpose within the ICAR:AR document. Following iterative reviews of each topic, the ICAR:AR document was synthesized and reviewed by all authors for consensus. Over 100 individual topics related to AR diagnosis, pathophysiology, epidemiology, disease burden, risk factors, allergy testing modalities, treatment, and other conditions/comorbidities associated with AR were addressed in the comprehensive ICAR:AR document. Herein, the Executive Summary provides a synopsis of these findings. In the ICAR:AR critical review of the literature, several strengths were identified. In addition, significant knowledge gaps exist in the AR literature where current practice is not based on the best quality evidence; these should be seen as opportunities for additional research. The ICAR:AR document evaluates the strengths and weaknesses of the AR literature. This Executive Summary condenses these findings into a short summary. The reader is also encouraged to consult the comprehensive ICAR:AR document for a thorough description of this work. © 2018 ARS-AAOA, LLC.
Mi, Tian; Merlin, Jerlin Camilus; Deverasetty, Sandeep; Gryk, Michael R; Bill, Travis J; Brooks, Andrew W; Lee, Logan Y; Rathnayake, Viraj; Ross, Christian A; Sargeant, David P; Strong, Christy L; Watts, Paula; Rajasekaran, Sanguthevar; Schiller, Martin R
2012-01-01
Minimotif Miner (MnM, available at http://minimotifminer.org or http://mnm.engr.uconn.edu) is an online database for identifying new minimotifs in protein queries. Minimotifs are short contiguous peptide sequences that have a known function in at least one protein. Here we report the third release of the MnM database, which has now grown 60-fold to approximately 300,000 minimotifs. Since short minimotifs are by their nature not very complex, we also summarize a new set of false-positive filters and linear regression scoring that vastly enhance minimotif prediction accuracy on a test data set. This online database can be used to predict new functions in proteins and causes of disease.
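A hypothetical sketch of minimotif scanning combined with a linear scoring filter: regex hits are retained only when a weighted combination of auxiliary features clears a threshold. The motif patterns, feature names, and weights are invented for illustration and are not MnM's actual filters.

```python
# Regex minimotif scan with a linear-regression-style false-positive filter.
import re

MOTIFS = {"SH3_ligand": r"P..P", "CK2_site": r"[ST]..[DE]"}
WEIGHTS = {"surface_accessibility": 1.2, "disorder": 0.8, "conservation": 1.5}

def scan(sequence, features, threshold=1.0):
    """features: per-motif dict of auxiliary feature values in [0, 1]."""
    hits = []
    for name, pattern in MOTIFS.items():
        for m in re.finditer(pattern, sequence):
            score = sum(WEIGHTS[k] * features[name][k] for k in WEIGHTS)
            if score >= threshold:          # keep only well-supported hits
                hits.append((name, m.start(), m.group(), round(score, 2)))
    return hits

feats = {"SH3_ligand": {"surface_accessibility": 0.9, "disorder": 0.8,
                        "conservation": 0.4},
         "CK2_site": {"surface_accessibility": 0.2, "disorder": 0.1,
                      "conservation": 0.1}}
print(scan("MAPPLPKRSSADE", feats))
```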
Interferometric interrogation of π-phase shifted fiber Bragg grating sensors
NASA Astrophysics Data System (ADS)
Srivastava, Deepa; Tiwari, Umesh; Das, Bhargab
2018-03-01
Interferometric interrogation techniques realized for conventional fiber Bragg grating (FBG) sensors are historically known to offer the highest sensitivity measurements; however, they have not yet been explored for π-phase-shifted FBG (πFBG) sensors. This, we believe, is due to the complex nature of the reflection/transmission spectrum of a πFBG, which cannot be directly used for interferometric interrogation. We therefore propose a simple yet innovative concept in this direction, wherein the transmission spectrum of a πFBG sensor is optically filtered using a specially designed fiber grating. The resulting filtered spectrum retains the entire characteristics of a πFBG sensor and hence can be interrogated with interferometric principles. Furthermore, due to the extremely narrow transmission notch of a πFBG sensor, a fiber interferometer can be realized with a significantly longer path difference. This leads to a substantially enhanced detection limit as compared to sensors based on a regular FBG of similar length. Theoretical analysis demonstrates that high resolution weak dynamic strain measurement down to 4 pε/√Hz is easily achievable. Preliminary experimental results are also presented as proof-of-concept of the proposed interrogation principle.
He, Bo; Zhang, Hongjin; Li, Chao; Zhang, Shujing; Liang, Yan; Yan, Tianhong
2011-01-01
This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix in an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. The mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to the vehicle motion. In order to verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves the accuracy of the navigation compared with the conventional method; moreover the algorithm has a low computational cost compared with EKF-SLAM. PMID:22346682
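An illustrative numpy sketch of the sparsification idea behind SEIF-SLAM: weak off-diagonal links in the information matrix are zeroed when their normalized strength falls below a threshold. Real SEIFs prune links more carefully to preserve estimator consistency; this shows only the thresholding step.

```python
# Prune weak links in an information (inverse covariance) matrix.
import numpy as np

def prune_weak_links(info, tol=0.05):
    """Zero out off-diagonal entries that are weak relative to the
    corresponding diagonal information values; keeps symmetry."""
    d = np.sqrt(np.abs(np.diag(info)))
    norm = np.abs(info) / np.outer(d, d)       # correlation-like strength
    mask = (norm >= tol) | np.eye(len(info), dtype=bool)
    return np.where(mask, info, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
info = A @ A.T + 6 * np.eye(6)                 # a dense information matrix
sparse_info = prune_weak_links(info, tol=0.15)
print("nonzeros before/after:", np.count_nonzero(info),
      np.count_nonzero(sparse_info))
```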
Aspart, Florian; Ladenbauer, Josef; Obermayer, Klaus
2016-11-01
Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interests in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology dependent electric field effects extracted from a canonical spatial "ball-and-stick" (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We, therefore, derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike to field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly-varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology with (and without) the influence of weak external electric fields.
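A minimal leaky integrate-and-fire sketch with an additive oscillatory-field term, in the spirit of the extension described above; the constant coupling gain stands in for the morphology-derived field filter of the full model, and all parameter values are assumptions.

```python
# LIF neuron driven by synaptic input plus a weak oscillatory field term.
import numpy as np

dt, T = 1e-4, 2.0                          # s
tau_m, E_L, V_th, V_reset = 0.02, -70e-3, -50e-3, -65e-3
R, I_syn = 100e6, 0.21e-9                  # membrane resistance, mean input
c_E, f_field, E_amp = 0.5e-3, 30.0, 1.0    # assumed gain (V per V/mm), Hz, V/mm

rng = np.random.default_rng(0)
t = np.arange(0, T, dt)
V, spikes = E_L, []
for ti in t:
    field = E_amp * np.sin(2 * np.pi * f_field * ti)
    noise = 2e-3 * rng.normal() * np.sqrt(dt / tau_m)   # crude noise term
    V += (-(V - E_L) + R * I_syn + c_E * field) / tau_m * dt + noise
    if V >= V_th:                          # threshold crossing -> spike
        spikes.append(ti)
        V = V_reset
print(f"rate = {len(spikes)/T:.1f} Hz")
```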
NASA Astrophysics Data System (ADS)
Hill, C. A.; Carmona, A.; Donati, J.-F.; Hussain, G. A. J.; Gregory, S. G.; Alencar, S. H. P.; Bouvier, J.; The Matysse Collaboration
2017-12-01
We report the results of our spectropolarimetric monitoring of the weak-line T-Tauri stars (wTTSs) Par 1379 and Par 2244, within the MaTYSSE (Magnetic Topologies of Young Stars and the Survival of close-in giant Exoplanets) programme. Both stars are of a similar mass (1.6 and 1.8 M⊙) and age (1.8 and 1.1 Myr), with Par 1379 hosting an evolved low-mass dusty circumstellar disc, and with Par 2244 showing evidence of a young debris disc. We detect profile distortions and Zeeman signatures in the unpolarized and circularly polarized lines for each star, and have modelled their rotational modulation using tomographic imaging, yielding brightness and magnetic maps. We find that Par 1379 harbours a weak (250 G), mostly poloidal field tilted 65° from the rotation axis. In contrast, Par 2244 hosts a stronger field (860 G) split 3:2 between poloidal and toroidal components, with most of the energy in higher order modes, and with the poloidal component tilted 45° from the rotation axis. Compared to the lower mass wTTSs, V819 Tau and V830 Tau, Par 2244 has a similar field strength, but is much more complex, whereas the much less complex field of Par 1379 is also much weaker than any other mapped wTTS. We find moderate surface differential rotation, 1.4× and 1.8× smaller than the solar value, for Par 1379 and Par 2244, respectively. Using our tomographic maps to predict the activity-related radial velocity (RV) jitter, and filter it from the RV curves, we find RV residuals with dispersions of 0.017 and 0.086 km s⁻¹ for Par 1379 and Par 2244, respectively. We find no evidence for close-in giant planets around either star, with 3σ upper limits of 0.56 and 3.54 M_Jup (at an orbital distance of 0.1 au).
Chaves, Eric N; Coelho, Ernane A A; Carvalho, Henrique T M; Freitas, Luiz C G; Júnior, João B V; Freitas, Luiz C
2016-09-01
This paper presents the design of a controller based on Internal Model Control (IMC) applied to a grid-connected single-phase PWM inverter. The mathematical modeling of the inverter and the LCL output filter, used to design the 1-DOF IMC controller, is presented, and the decoupling of the grid voltage by a feedforward strategy is analyzed. A Proportional-Resonant (P+Res) controller was applied to the same plant in the experimental work, allowing the differences between the IMC and P+Res performances to be discussed and the proposed control strategy to be evaluated. The results are presented for typical conditions, for a weak grid, and for a non-linear local load, in order to verify the behavior of the controller in such situations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
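A textbook sketch of the 1-DOF IMC design step on a first-order stand-in for the plant (the actual inverter-plus-LCL model is higher order): with model G(s) = K/(τs+1) and IMC filter f(s) = 1/(λs+1), the equivalent classical controller Q/(1 - GQ) reduces to a PI law with Kp = τ/(Kλ) and Ki = 1/(Kλ). All numeric values are illustrative.

```python
# IMC-equivalent PI gains for a first-order model; with a perfect model the
# closed loop equals the IMC filter f(s) = 1/(lam*s + 1).
from scipy.signal import TransferFunction, step

K, tau, lam = 2.0, 0.01, 0.002             # plant gain, time constant, tuning

Kp, Ki = tau / (K * lam), 1.0 / (K * lam)  # IMC-equivalent PI gains
print(f"Kp = {Kp:.3f}, Ki = {Ki:.1f}")

T_cl = TransferFunction([1.0], [lam, 1.0]) # nominal closed-loop response
t, y = step(T_cl)
print("closed-loop step settles near 1:", y[-1].round(3))
```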
NASA Technical Reports Server (NTRS)
Allison, Michael; Atkinson, David H.; Hansen, James E. (Technical Monitor)
2001-01-01
Doppler radio tracking of the Galileo probe-to-orbiter relay, previously analyzed for its in situ measure of Jupiter's zonal wind at the equatorial entry site, also shows a record of significant residual fluctuations apparently indicative of varying vertical motions. Regular oscillations over pressure depth in the residual Doppler measurements of roughly 1-8 Hz (increasing upward), as filtered over a 134 sec window, are most plausibly interpreted as gravity waves, and imply a weak, but downward increasing static stability within the 5 - 20 bar region of Jupiter's atmosphere. A matched extension to deeper levels of an independent inertial stability constraint from the measured vertical wind shear at 1 - 4 bars is roughly consistent with a static stability of approximately 0.5 K/km near the 20 bar level, as independently detected by the probe Atmospheric Structure Instrument.
Reduced African Easterly Wave Activity with Quadrupled CO2 in the Superparameterized CESM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannah, Walter M.; Aiyyer, Anantha
2017-10-01
African easterly wave (AEW) activity is examined in quadrupled CO2 experiments with the superparameterized CESM (SP-CESM). The variance of 2–10-day filtered precipitation increases with warming over the West African monsoon region, suggesting increased AEW activity. The perturbation enstrophy budget is used to investigate the dynamic signature of AEW activity. The northern wave track becomes more active associated with enhanced baroclinicity, consistent with previous studies. The southern track exhibits a surprising reduction of wave activity associated with less frequent occurrence of weak waves and a slight increase in the occurrence of strong waves. These changes are connected to changes in the profile of vortex stretching and tilting that can be understood as interconnected consequences of increased static stability from the lapse rate response, weak temperature gradient balance, and the fixed anvil temperature hypothesis.
The Extent and Consequences of P-Hacking in Science
Head, Megan L.; Holman, Luke; Lanfear, Rob; Kahn, Andrew T.; Jennions, Michael D.
2015-01-01
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses. PMID:25768323
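A sketch of one simple p-hacking check of the kind used in such meta-analyses: among significant results, compare the counts of p-values in the upper and lower halves of the (0.04, 0.05) interval with a one-sided binomial test; an excess just below 0.05 is suggestive of p-hacking. The p-values below are illustrative.

```python
# Binomial test for a pile-up of p-values just under the 0.05 threshold.
import numpy as np
from scipy.stats import binomtest

p_values = np.array([0.041, 0.049, 0.048, 0.012, 0.046, 0.044, 0.030,
                     0.047, 0.049, 0.043, 0.002, 0.045, 0.048])
hi = np.sum((p_values > 0.045) & (p_values < 0.05))   # just under .05
lo = np.sum((p_values > 0.04) & (p_values <= 0.045))
res = binomtest(int(hi), int(hi + lo), p=0.5, alternative='greater')
print(f"{hi} vs {lo} in the two bins, one-sided p = {res.pvalue:.3f}")
```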
Hohmann, Erik; Brand, Jefferson C; Rossi, Michael J; Lubowitz, James H
2018-02-01
Our current trend and focus on evidence-based medicine is biased in favor of randomized controlled trials, which are ranked highest in the hierarchy of evidence while devaluing expert opinion, which is ranked lowest in the hierarchy. However, randomized controlled trials have weaknesses as well as strengths, and no research method is flawless. Moreover, stringent application of scientific research techniques, such as the Delphi Panel methodology, allows survey of experts in a high quality and scientific manner. Level V evidence (expert opinion) remains a necessary component in the armamentarium used to determine the answer to a clinical question. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Characterization of network structure in stereoEEG data using consensus-based partial coherence.
Ter Wal, Marije; Cardellicchio, Pasquale; LoRusso, Giorgio; Pelliccia, Veronica; Avanzini, Pietro; Orban, Guy A; Tiesinga, Paul He
2018-06-06
Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-) inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudoinverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data-analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG). Copyright © 2018. Published by Elsevier Inc.
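A numpy sketch of the partial-coherence core on which cPCOH builds: at a single frequency, conditioning on all other channels amounts to inverting the cross-spectral matrix S and normalizing its off-diagonal entries. The consensus step (random channel aggregation plus thresholding over permutations) is omitted for brevity, and the toy data are invented.

```python
# Partial coherence from the inverse of a cross-spectral matrix.
import numpy as np

def partial_coherence(S):
    """S: (n, n) Hermitian cross-spectral matrix at a single frequency."""
    G = np.linalg.inv(S)
    d = np.sqrt(np.real(np.diag(G)))
    P = np.abs(G) / np.outer(d, d)
    np.fill_diagonal(P, 0.0)
    return P

rng = np.random.default_rng(0)
# toy spectral matrix from epochs of a 5-channel signal in which a common
# component drives channels 0 and 1
n_ch, n_ep = 5, 200
X = rng.normal(size=(n_ep, n_ch)) + 1j * rng.normal(size=(n_ep, n_ch))
shared = rng.normal(size=n_ep) + 1j * rng.normal(size=n_ep)
X[:, 0] += shared
X[:, 1] += shared
S = X.conj().T @ X / n_ep
print(partial_coherence(S).round(2))
```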
Optimizing weak lensing mass estimates for cluster profile uncertainty
Gruen, D.; Bernstein, G. M.; Lam, T. Y.; ...
2011-09-11
Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_ap that minimizes the mass estimate variance <(M_ap - M_200m)^2> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_ap filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. Finally, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.
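A schematic numpy version of a circular aperture mass estimator, a weighted sum of tangential ellipticities of background galaxies; the compensated filter Q below is a simple placeholder, not the variance-minimizing filter derived in the paper, and the toy shear field is invented.

```python
# Aperture mass: density-normalized sum of Q(theta) * tangential ellipticity.
import numpy as np

rng = np.random.default_rng(0)
n_gal, theta_max = 2000, 15.0                  # galaxies, field half-size (arcmin)
x, y = rng.uniform(-theta_max, theta_max, (2, n_gal))
theta = np.hypot(x, y)
phi = np.arctan2(y, x)

# toy shear field: tangential shear falling off with radius, plus shape noise
g_t = 0.05 / np.maximum(theta, 1.0)
e1 = -g_t * np.cos(2 * phi) + 0.25 * rng.normal(size=n_gal)
e2 = -g_t * np.sin(2 * phi) + 0.25 * rng.normal(size=n_gal)
e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))  # tangential component

def Q(theta, R=10.0):                          # placeholder aperture filter
    u = theta / R
    return np.where(u < 1.0, u**2 * (1 - u**2), 0.0)

density = n_gal / (2 * theta_max) ** 2         # galaxies per arcmin^2
M_ap = np.sum(Q(theta) * e_t) / density
print(f"aperture mass statistic: {M_ap:.3f}")
```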
Ashley, Kevin; Brisson, Michael J; Howe, Alan M; Bartley, David L
2009-12-01
A collaborative interlaboratory evaluation of a newly standardized inductively coupled plasma mass spectrometry (ICP-MS) method for determining trace beryllium in workplace air samples was carried out toward fulfillment of method validation requirements for ASTM International voluntary consensus standard test methods. The interlaboratory study (ILS) was performed in accordance with an applicable ASTM International standard practice, ASTM E691, which describes statistical procedures for investigating interlaboratory precision. Uncertainty was also estimated in accordance with ASTM D7440, which applies the International Organization for Standardization Guide to the Expression of Uncertainty in Measurement to air quality measurements. Performance evaluation materials (PEMs) used consisted of 37 mm diameter mixed cellulose ester filters that were spiked with beryllium at levels of 0.025 (low loading), 0.5 (medium loading), and 10 (high loading) µg Be/filter; these spiked filters were prepared by a contract laboratory. Participating laboratories were recruited from a pool of over 50 invitees; ultimately, 20 laboratories from Europe, North America, and Asia submitted ILS results. Triplicates of each PEM (blanks plus the three different loading levels) were conveyed to each volunteer laboratory, along with a copy of the draft standard test method that each participant was asked to follow; spiking levels were unknown to the participants. The laboratories were requested to prepare the PEMs by one of three sample preparation procedures (hotplate or microwave digestion or hotblock extraction) that were described in the draft standard. Participants were then asked to analyze aliquots of the prepared samples by ICP-MS and to report their data in units of µg Be/filter sample. Interlaboratory precision estimates from participating laboratories, computed in accordance with ASTM E691, were 0.165, 0.108, and 0.151 (relative standard deviation) for the PEMs spiked at 0.025, 0.5, and 10 µg Be/filter, respectively. Overall recoveries were 93.2%, 102%, and 80.6% for the low, medium, and high beryllium loadings, respectively. Expanded uncertainty estimates for interlaboratory analysis of low, medium, and high beryllium loadings, calculated in accordance with ASTM D7440, were 18.8%, 19.8%, and 24.4%, respectively. These figures of merit support promulgation of the analytical procedure as an ASTM International standard test method, ASTM D7439.
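A sketch of ASTM E691-style precision statistics of the kind quoted above: repeatability (within-lab) and reproducibility (between-lab) standard deviations from p laboratories times n replicate results at one loading level. The numbers are illustrative, not the ILS data.

```python
# E691-style repeatability (s_r) and reproducibility (s_R) estimates.
import numpy as np

# rows = laboratories, columns = triplicate results (ug Be/filter, invented)
x = np.array([[0.024, 0.026, 0.025],
              [0.022, 0.023, 0.021],
              [0.027, 0.029, 0.028],
              [0.025, 0.024, 0.026]])
p, n = x.shape

cell_means = x.mean(axis=1)
s_r = np.sqrt(np.mean(x.var(axis=1, ddof=1)))          # repeatability SD
s_xbar = cell_means.std(ddof=1)                        # SD of lab means
s_L2 = max(s_xbar**2 - s_r**2 / n, 0.0)                # between-lab variance
s_R = np.sqrt(s_r**2 + s_L2)                           # reproducibility SD
print(f"s_r = {s_r:.4f}, s_R = {s_R:.4f}, RSD_R = {s_R/x.mean():.3f}")
```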
DEVELOPMENT OF A LAMINATED DISK FOR THE SPIN TEK ROTARY MICROFILTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, D.
2011-06-03
Funded by the Department of Energy Office of Environmental Management, EM-31, the Savannah River National Laboratory (SRNL) partnered with SpinTek Filtration™ to develop a filter disk that would withstand a reverse pressure or flow during operation of the rotary microfilter. The ability to withstand a reverse pressure and flow eliminates a potential accident scenario that could have resulted in damage to the filter membranes. While the original welded filter disks have been shown to withstand a reverse pressure/flow in the static condition, the filter disk design discussed in this report will allow a reverse pressure/flow while the disks are rotating. In addition, the laminated disk increases the flexibility during filter startup and cleaning operations. The new filter disk developed by SRNL and SpinTek is manufactured with a more open structure, significantly reducing internal flow restrictions in the disk. The prototype was tested at the University of Maryland and demonstrated to withstand the reverse pressure due to the centrifugal action of the rotary filter. The tested water flux of the disk was demonstrated to be 1.34 gpm in a single disk test. By comparison, the water flux of the current disk was 0.49 gpm per disk during a 25 disk test. The disk also demonstrated rejection of solids by filtering a 5 wt % strontium carbonate slurry with a filtrate clarity of less than 1.4 Nephelometric Turbidity Units (NTU) throughout the two hour test. The Savannah River National Laboratory (SRNL) has been working with SpinTek Filtration™ to adapt the rotary microfilter for radioactive service in the Department of Energy (DOE) Complex. One potential weakness is the loose nature of the membrane on the filter disks. The current disk is constructed by welding the membrane at the outer edge of the disk. The seal for the center of the membrane is accomplished by an o-ring in compression for the assembled stack. The remainder of the membrane is free floating on the disk. This construction requires that a positive pressure be applied to the rotary filter tank to prevent the membrane from rising from the disk structure and potentially contacting the filter turbulence promoter. In addition, one accident scenario is a reverse flow through the filtrate line due to mis-alignment of valves, resulting in the membrane rising from the disk structure. The structural integrity of the current disk has been investigated, and it has been shown that the disk can withstand a significant reverse pressure in a static condition. However, the disk will likely incur damage if the filter stack is rotated during a reverse pressure. The development of a laminated disk would have several significant benefits for the operation of the rotary filter, including the prevention of a compromise in filter disk integrity during a reverse flow accident, increasing operational flexibility, and increasing the self-cleaning ability of the filter. A laminated disk would allow filter rotor operation prior to a positive pressure in the filter tank. This would prevent the initial dead-head of the filter and the resulting initial filter cake buildup. The laminated disk would allow rotor operation with cleaning fluid, eliminating the need for a recirculation pump. Additionally, a laminated disk would allow a reverse flow of fluid through the membrane pores, removing trapped particles.
Bowtie filters for dedicated breast CT: Theory and computational implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.
Purpose: To design bowtie filters with improved properties for dedicated breast CT to improve image quality and reduce dose to the patient. Methods: The authors present three different bowtie filters designed for a cylindrical 14-cm diameter phantom with a uniform composition of 40/60 breast tissue, which vary in their design objectives and performance improvements. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. Bowtie design #3 eliminates the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. All three designs are obtained using analytical computational methods and linear attenuation coefficients. Thus, the designs do not take into account the effects of scatter. The authors considered this to be a reasonable approach to the filter design problem since the use of Monte Carlo methods would have been computationally intensive. The filter profiles for a cone-angle of 0° were used for the entire length of each filter because the differences between those profiles and the correct cone-beam profiles for the cone angles in our system are very small, and the constant profiles allowed construction of the filters with the facilities available to us. For evaluation of the filters, we used Monte Carlo simulation techniques and the full cone-beam geometry. Images were generated with and without each bowtie filter to analyze the effect on dose distribution, noise uniformity, and contrast-to-noise ratio (CNR) homogeneity. Line profiles through the reconstructed images generated from the simulated projection images were also used as validation for the filter designs. Results: Examples of the three designs are presented. Initial verification of performance of the designs was done using analytical computations of HVL, intensity, and effective attenuation coefficient behind the phantom as a function of fan-angle with a cone-angle of 0°. The performance of the designs depends only weakly on incident spectrum and tissue composition. For all designs, the dynamic range requirement on the detector was reduced compared to the no-bowtie-filter case. Further verification of the filter designs was achieved through analysis of reconstructed images from simulations. Simulation data also showed that the use of our bowtie filters can reduce peripheral dose to the breast by 61% and provide uniform noise and CNR distributions. The bowtie filter design concepts validated in this work were then used to create a computational realization of a 3D anthropomorphic bowtie filter capable of achieving a constant effective attenuation coefficient behind the entire field-of-view of an anthropomorphic breast phantom. Conclusions: Three different bowtie filter designs that vary in performance improvements were described and evaluated using computational and simulation techniques. Results indicate that the designs are robust against variations in breast diameter, breast composition, and tube voltage, and that the use of these filters can reduce patient dose and improve image quality compared to the no-bowtie-filter case.
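A sketch of the idea behind bowtie design #3 described above: choose the filter thickness along fan angle so that the combined filter-plus-phantom attenuation is the same for every ray through a cylindrical phantom. The single-energy attenuation coefficients and geometry below are assumed stand-ins for the polyenergetic design in the paper.

```python
# Equalize mu*L across fan angle for a cylindrical phantom.
import numpy as np

mu_t, mu_f = 0.20, 0.50               # 1/cm: tissue and filter (assumed values)
R, dso = 7.0, 45.0                    # phantom radius, source-isocenter (cm)

gamma = np.linspace(-np.arcsin(R / dso), np.arcsin(R / dso), 201)
b = dso * np.sin(gamma)               # impact parameter of each ray
L = 2.0 * np.sqrt(np.maximum(R**2 - b**2, 0.0))   # chord through phantom

t = mu_t * (L.max() - L) / mu_f       # filter thickness equalizing mu*L
total = mu_t * L + mu_f * t           # effective attenuation per ray
print("max thickness (cm):", t.max().round(2),
      "| attenuation spread:", np.ptp(total).round(6))
```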
Deng, Haishan; Shang, Erxin; Xiang, Bingren; Xie, Shaofei; Tang, Yuping; Duan, Jin-ao; Zhan, Ying; Chi, Yumei; Tan, Defei
2011-03-15
The stochastic resonance algorithm (SRA) has been developed in recent years as a potential tool for amplifying and determining weak chromatographic peaks. However, the conventional SRA cannot be applied directly to ultra-performance liquid chromatography/time-of-flight mass spectrometry (UPLC/TOFMS). The obstacle lies in the fact that the narrow peaks generated by UPLC contain high-frequency components which fall beyond the restrictions of the theory of stochastic resonance. Although an algorithm already exists that allows a high-frequency weak signal to be detected, the sampling frequency of TOFMS is not fast enough to meet the requirement of that algorithm. Another problem is the suppression of the weak peak of a compound with low concentration or weak detection response, which prevents the simultaneous determination of multi-component UPLC/TOFMS peaks. In order to lower the frequencies of the peaks, an interpolation and re-scaling frequency stochastic resonance (IRSR) is proposed, which re-scales the peak frequencies by numerically interpolating sample points linearly. The re-scaled UPLC/TOFMS peaks can then be amplified significantly. By introducing an external energy field upon the UPLC/TOFMS signals, a method of energy gain was developed to simultaneously amplify and determine weak peaks from multiple components. Subsequently, a multi-component stochastic resonance algorithm was constructed for the simultaneous quantitative determination of multiple weak UPLC/TOFMS peaks based on the two methods. The optimization of parameters is discussed in detail with simulated data sets, and the applicability of the algorithm was evaluated by quantitative analysis of three alkaloids in human plasma using UPLC/TOFMS. The new algorithm performed well in improving the signal-to-noise ratio (S/N) compared with several commonly used peak enhancement methods, including the Savitzky-Golay filter, Whittaker-Eilers smoother and matched filtration. Copyright © 2011 John Wiley & Sons, Ltd.
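A loose sketch of the two steps described above: dense linear interpolation to re-scale peak frequencies into the stochastic-resonance regime, followed by Euler integration of the bistable system dx/dt = ax - bx^3 + s(t). Parameter values are illustrative and the demonstration is qualitative only, not the tuned IRSR procedure.

```python
# Step 1: interpolation re-scaling; step 2: bistable stochastic resonance.
import numpy as np

def rescale(signal, factor=10):
    """Numerically 'slow down' the signal by dense linear interpolation."""
    n = len(signal)
    xi = np.linspace(0, n - 1, n * factor)
    return np.interp(xi, np.arange(n), signal)

def bistable_sr(s, a=1.0, b=1.0, dt=1e-3):
    """Euler integration of the double-well system driven by s(t)."""
    x = np.zeros(len(s))
    for i in range(1, len(s)):
        x[i] = x[i-1] + dt * (a * x[i-1] - b * x[i-1]**3 + s[i-1])
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
peak = 0.2 * np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)   # weak narrow peak
noisy = peak + 0.1 * rng.normal(size=t.size)
out = bistable_sr(rescale(noisy))
print("max |output|:", np.abs(out).max().round(2))
```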
NASA Astrophysics Data System (ADS)
Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei
2017-09-01
Planetary transmission plays a vital role in wind turbine drivetrains, and its fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration sources, time-variant vibration transfer paths, and heavy background noise masking effect, the vibration signal of a planet gear in wind turbine gearboxes exhibits several unique characteristics: complex frequency components, low signal-to-noise ratio, and weak fault features. In this sense, the periodic impulsive components induced by a localized defect are hard to extract, and the fault detection of planet gears in wind turbines remains a challenging research problem. Aiming to extract the fault features of the planet gear effectively, we propose a novel feature extraction method based on spectral kurtosis and time wavelet energy spectrum (SK-TWES) in this paper. Firstly, the spectral kurtosis (SK) and kurtogram of the raw vibration signals are computed and exploited to select the optimal filtering parameters for the subsequent band-pass filtering. Then, band-pass filtering is applied to extract the periodic transient impulses, using the optimal frequency band in which the corresponding SK value is maximal. Finally, time wavelet energy spectrum analysis is performed on the filtered signal, selecting the Morlet wavelet as the mother wavelet because it possesses a high similarity to the impulsive components. The experimental signals collected from a wind turbine gearbox test rig demonstrate that the proposed method is effective for feature extraction and fault diagnosis of a planet gear with a localized defect.
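A compact sketch of the band-selection step, using a simple STFT-based spectral kurtosis as a stand-in for the full kurtogram (which additionally scans window lengths); the bandwidth choice and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, butter, filtfilt
from scipy.stats import kurtosis

def sk_bandpass(x, fs, nperseg=256):
    """Band-pass filter a vibration signal around the frequency bin whose
    spectral kurtosis is maximal, so periodic transient impulses are kept
    and broadband background is suppressed."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    sk = kurtosis(np.abs(Z), axis=1)          # SK of each bin over time
    fc = f[np.argmax(sk[1:-1]) + 1]           # skip DC and Nyquist bins
    bw = 4 * fs / nperseg                     # illustrative bandwidth
    lo, hi = max(fc - bw / 2, 1.0), min(fc + bw / 2, fs / 2 - 1.0)
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x), fc
```

The subsequent time wavelet energy spectrum step (Morlet wavelet analysis of the filtered signal) is omitted here for brevity.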
Coronary artery segmentation in X-ray angiograms using Gabor filters and differential evolution.
Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel
2018-08-01
Segmentation of coronary arteries in X-ray angiograms represents an essential task for computer-aided diagnosis, since it can help cardiologists in diagnosing and monitoring vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement based on Gabor filters tuned using the optimization strategy of differential evolution (DE) is proposed. Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area (Az) under the receiver operating characteristic curve is used as the objective function. In the experimental results, the proposed method achieves Az = 0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az = 0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation using the test set. In addition, the experimental results have also shown that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
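A hedged sketch of the training loop as we read it: a three-parameter Gabor filter bank scores each pixel, and differential evolution maximizes the area under the ROC curve (Az) on labeled training images. Function names, the exact parameterization and the bounds are illustrative, not the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import differential_evolution
from sklearn.metrics import roc_auc_score

def gabor_kernel(sigma, lam, ksize, theta):
    """Real-valued Gabor kernel with envelope width sigma, wavelength lam,
    odd size ksize and orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def vessel_response(img, params, n_orient=12):
    """Maximum Gabor response over orientations; high values = vessel-like."""
    sigma, lam, ksize = params
    k = 2 * (int(round(ksize)) // 2) + 1       # force an odd kernel size
    resp = [fftconvolve(img, gabor_kernel(sigma, lam, k, th), mode="same")
            for th in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.max(resp, axis=0)

def negative_az(params, imgs, masks):
    """Objective for DE: minus the area under the ROC curve over all pixels."""
    scores = np.concatenate([vessel_response(i, params).ravel() for i in imgs])
    labels = np.concatenate([m.ravel() for m in masks])
    return -roc_auc_score(labels, scores)

# Hypothetical call, given train_imgs / train_masks arrays:
# result = differential_evolution(negative_az, bounds=[(1, 8), (4, 20), (7, 31)],
#                                 args=(train_imgs, train_masks), maxiter=20)
```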
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
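A minimal sketch of the PCA step on a (days x stations) matrix of residual coordinates; the truncation at n_modes is the user's choice, and the KLE normalization refinements are omitted.

```python
import numpy as np

def pca_common_mode_filter(X, n_modes=1):
    """Remove the leading principal components, interpreted as common mode
    error, from a days-by-stations residual time-series matrix X. The SVD
    gives temporally varying modes (columns of U) and their spatial
    responses (rows of Vt), which need not be spatially uniform."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    common = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes, :]
    return Xc - common, common
```

Unlike simple network-mean "stacking", the recovered spatial response can vary from station to station, which is the point of the PCA/KLE framework.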
Zhen, Qi; Zhang, Min; Song, Wenlan; Wang, Huiju; Wang, Xuemei; Du, Xinzhen
2016-10-01
An oriented titanium-nickel oxide composite nanotube coating was grown in situ on a nitinol wire by direct electrochemical anodization in ethylene glycol with ammonium fluoride and water for the first time. The morphology and composition of the resulting coating showed that the anodized nitinol wire provided a titania-rich coating. The titanium-nickel oxide composite nanotube-coated fiber was used for solid-phase microextraction of different aromatic compounds coupled to high-performance liquid chromatography with UV detection. The titanium-nickel oxide composite nanotube coating exhibited high extraction capability, good selectivity, and rapid mass transfer for weakly polar UV filters. Thereafter, the important parameters affecting extraction efficiency were investigated for solid-phase microextraction of UV filters. Under the optimized conditions, the calibration curves were linear in the range of 0.1-300 μg/L for the target UV filters, with limits of detection of 0.019-0.082 μg/L. The intraday and interday precision of the proposed method with a single fiber were 5.3-7.2 and 5.9-7.9%, respectively, and the fiber-to-fiber reproducibility ranged from 6.3 to 8.9% for four fibers fabricated in different batches. Finally, its applicability was evaluated by the extraction and determination of the target UV filters in environmental water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map
Chang, C.; Sheldon, E.; Pujol, A.; ...
2018-01-04
We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous ≈1,500 deg², covering a comoving volume of ≈10 Gpc³. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogs, METACALIBRATION and IM3SHAPE, with sources at redshift 0.2 < z < 1.3, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal-to-noise in the E-mode and the B-mode map is ~1.5 (~2) when smoothed with a Gaussian filter of σG = 30 (80) arcminutes. The second and third moments of the convergence κ in the maps are in agreement with simulations. We also find no significant correlation of κ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation with different foreground tracers of mass and (2) exploration of the largest peaks and voids in the maps.
Monostatic lidar in weak-to-strong turbulence
NASA Astrophysics Data System (ADS)
Andrews, L. C.; Phillips, R. L.
2001-07-01
A heuristic scintillation model previously developed for weak-to-strong irradiance fluctuations of a spherical wave is extended in this paper to the case of a monostatic lidar configuration. As in the previous model, we account for the loss of spatial coherence as the optical wave propagates through atmospheric turbulence by eliminating the effects of certain turbulent scale sizes that exist between the scale size of the spatial coherence radius of the beam and that of the scattering disc. These mid-range scale-size effects are eliminated through the formal introduction of spatial scale frequency filters that continually adjust spatial cut-off frequencies as the optical wave propagates. In addition, we also account for correlations that exist in the incident wave to the target and the echo wave from the target arising from double-pass propagation through the same random inhomogeneities of the atmosphere. We separately consider the case of a point target and a diffuse target, concentrating on both the enhanced backscatter effect in the mean irradiance and the increase in scintillation in a monostatic channel. Under weak and strong irradiance fluctuations our asymptotic expressions are in agreement with previously published asymptotic results.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
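A sketch of the spectral subtraction step alone, with the in-band noise power spectrum passed in as an argument; in the paper it is estimated from the associated production function, which is not reproduced here. The over-subtraction factor and spectral floor are illustrative.

```python
import numpy as np

def spectral_subtraction(x, noise_psd, alpha=1.0, beta=0.01):
    """Subtract an estimate of the residual in-band noise power spectrum
    from the band-pass-filtered signal's spectrum and resynthesize with
    the original phase. noise_psd must have len(x)//2 + 1 bins (rfft)."""
    X = np.fft.rfft(x)
    power = np.abs(X)**2 - alpha * noise_psd
    power = np.maximum(power, beta * np.abs(X)**2)   # spectral floor
    return np.fft.irfft(np.sqrt(power) * np.exp(1j * np.angle(X)), len(x))
```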
Wang, Fang; Meng, Dan; Li, Xiuwei; Tan, Junjie
2016-08-01
Indoor and outdoor air PM2.5 concentrations in four residential dwellings characterized by different building envelope air tightness levels and HVAC-filter configurations in the Yangtze River Delta (YRD) were measured during winter periods in 2014-2015. Steady-state models for indoor PM2.5 were developed for each of the tested dwellings, based on the mass balance equation. The indoor air PM2.5 concentrations in the four tested apartments were significantly different. The lowest geometric mean values of indoor air PM2.5 concentrations, I/O ratios, and infiltration factor were observed in D3, with high air tightness and without an HVAC-filter system (26.0 μg/m(3), 0.197, and 0.167, respectively), while the highest were observed in D1 (64.9 μg/m(3), 0.876, and 0.867, respectively). For apartment D1, with normal air tightness and without any HVAC-filter system, indoor air PM2.5 concentrations were significantly correlated with outdoor PM2.5 concentrations, especially on severe ambient pollution days, when closed windows played only a very weak role in reducing indoor PM2.5 concentrations. With the enhancement of building air tightness, indoor air PM2.5 concentrations can be decreased effectively and do not vary as much in response to fluctuations in ambient concentrations. For buildings with normal air tightness, the use of HVAC-filter combinations will decrease indoor PM2.5 significantly. However, for buildings with enhanced air tightness, using only a fresh makeup air supply system with a filter may increase indoor PM2.5 concentrations. Improving filter efficiency for both fresh makeup air and indoor recirculated air is very important, and purifiers for indoor recirculated air are highly recommended for all buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.
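For orientation, a generic single-zone steady-state mass balance of the kind such models are built on (symbols chosen here for illustration, not the paper's notation): with air exchange rate a, particle penetration factor P, deposition rate k, and indoor emission rate E into volume V,

```latex
a P C_{\mathrm{out}} + \frac{E}{V} = (a + k)\, C_{\mathrm{in}}
\quad\Longrightarrow\quad
C_{\mathrm{in}} = \frac{a P C_{\mathrm{out}} + E/V}{a + k},
\qquad
F_{\mathrm{inf}} = \left.\frac{C_{\mathrm{in}}}{C_{\mathrm{out}}}\right|_{E=0} = \frac{a P}{a + k}.
```

A tighter envelope lowers the air exchange rate a and hence the infiltration factor F_inf, consistent with the low infiltration factor reported for the airtight dwelling D3.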
What is the best term in Spanish to express the concept of cancer-related fatigue?
Centeno, Carlos; Portela Tejedor, María Angustias; Carvajal, Ana; San Miguel, Maria Teresa; Urdiroz, Julia; Ramos, Luis; De Santiago, Ana
2009-05-01
Fatigue is one of the most frequent symptoms in patients with cancer. No adequate term in Spanish has been defined to describe the English concept of fatigue. To identify the most suitable Spanish words that define the concept of fatigue and to check their psychometric characteristics. Consensus with professional experts on the Spanish words that best suit the English concept of fatigue. A prospective study on oncologic patients was also undertaken, which included an evaluation of the intensity of fatigue through visual numeric scales (VNS) based on the previously selected words. The fatigue subscale of the Functional Assessment of Cancer Therapy-Fatigue (FACT-F) questionnaire was taken as a reference. The experts highlighted the words cansancio, agotamiento, and debilidad (tiredness, exhaustion, and weakness) as the terms that best defined the concept of fatigue. In the psychometric assessment study, 100 patients were included, of which 61 (61%) presented diagnostic values for cancer-related fatigue on the FACT-F fatigue subscale (score 34/52 or lower). The VNS for the chosen terms obtained a high correlation with the FACT-F fatigue subscale results: cansancio (tiredness) r = -0.71, agotamiento (exhaustion) r = -0.74, debilidad (weakness) r = -0.74, with no statistical differences between them. For the detection of fatigue by means of the VNS, tiredness (cutoff point > or =4/10) gave sensitivity (S) 0.90 and specificity (E) 0.72; exhaustion (cutoff point > or =3/10) S 0.95 and E 0.90; and weakness (cutoff point > or =4/10) S 0.92 and E 0.72. The area under the ROC curve was 0.88 for tiredness, 0.94 for exhaustion, and 0.92 for weakness, with no significant difference between the areas mentioned. The terms cansancio, agotamiento, and debilidad (tiredness, exhaustion, and weakness) are suitable for defining the English concept of fatigue in Spanish, and should be the preferred option for inclusion in evaluation tools.
LoCuSS: weak-lensing mass calibration of galaxy clusters
NASA Astrophysics Data System (ADS)
Okabe, Nobuhiro; Smith, Graham P.
2016-10-01
We present weak-lensing mass measurements of 50 X-ray luminous galaxy clusters at 0.15 ≤ z ≤ 0.3, based on uniform high-quality observations with Suprime-Cam mounted on the 8.2-m Subaru telescope. We pay close attention to possible systematic biases, aiming to control them at the ≲4 per cent level. The dominant source of systematic bias in weak-lensing measurements of the mass of individual galaxy clusters is contamination of background galaxy catalogues by faint cluster and foreground galaxies. We extend our conservative method for selecting background galaxies with (V - I') colours redder than the red sequence of cluster members to use a colour-cut that depends on cluster-centric radius. This allows us to define background galaxy samples that suffer ≤1 per cent contamination, and comprise 13 galaxies per square arcminute. Thanks to the purity of our background galaxy catalogue, the largest systematic that we identify in our analysis is a shape measurement bias of 3 per cent, that we measure using simulations that probe weak shears up to g = 0.3. Our individual cluster mass and concentration measurements are in excellent agreement with predictions of the mass-concentration relation. Equally, our stacked shear profile is in excellent agreement with the Navarro-Frenk-White profile. Our new Local Cluster Substructure Survey mass measurements are consistent with the Canadian Cluster Cosmology Project and Cluster Lensing And Supernova Survey with Hubble surveys, and in tension with Weighing the Giants at ~1σ-2σ significance. Overall, the consensus at z ≤ 0.3 that is emerging from these complementary surveys represents important progress for cluster mass calibration, and augurs well for cluster cosmology.
Recommendations for the management of biofilm: a consensus document.
Bianchi, T; Wolcott, R D; Peghetti, A; Leaper, D; Cutting, K; Polignano, R; Rosa Rita, Z; Moscatelli, A; Greco, A; Romanelli, M; Pancani, S; Bellingeri, A; Ruggeri, V; Postacchini, L; Tedesco, S; Manfredi, L; Camerlingo, Maria; Rowan, S; Gabrielli, A; Pomponio, G
2016-06-01
The potential impact of biofilm on healing in acute and chronic wounds is one of the most controversial current issues in wound care. A significant amount of laboratory-based research has been carried out on this topic; however, in 2013 the European Wound Management Association (EWMA) pointed out the lack of guidance for managing biofilms in clinical practice and called for guidelines and further clinical research. In response to this challenge, the Italian Nursing Wound Healing Society (AISLeC) initiated a project which aimed to achieve consensus among a multidisciplinary and multiprofessional international panel of experts to identify what could be considered part of 'good clinical practice' with respect to the recognition and management of biofilms in acute and chronic wounds. The group followed a systematic approach, developed by the GRADE working group, to define relevant questions and clinical recommendations raised in clinical practice. An independent librarian retrieved and screened approximately 2000 pertinent published papers to produce tables of levels of evidence. After a smaller focus group had held a multistep structured discussion and a formal voting process had been completed, ten therapeutic interventions were identified as strongly recommendable for clinical practice, while another four recommendations were graded as 'weak'. The panel subsequently formulated a preliminary statement (although with a weak grade of agreement): 'provided that other causes that prevent optimal wound healing have been ruled out, chronic wounds are chronically infected'. All members of the panel agreed that there is a paucity of reliable, well-conducted clinical trials which have produced clear evidence related to the effects of biofilm presence. In the meantime, it was agreed that expert-based guidelines needed to be developed for the recognition and management of biofilms in wounds and for the best design of future clinical trials. This is a fundamental and urgent task for both laboratory-based scientists and clinicians.
Relevance of quantum mechanics on some aspects of ion channel function
Roy, Sisir
2010-01-01
Mathematical modeling of ionic diffusion along K ion channels indicates that such diffusion is oscillatory, at the weak non-Markovian limit. This finding leads us to derive a Schrödinger–Langevin equation for this kind of system within the framework of stochastic quantization. Planck's constant is shown to be relevant to the Lagrangian action at the level of a single ion channel. This sheds new light on the issue of applicability of quantum formalism to ion channel dynamics and to the physical constraints of the selectivity filter.
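The abstract does not reproduce the equation itself; for orientation, one widely used form of the Schrödinger–Langevin equation is Kostin's, with friction coefficient γ and a random potential V_r(t) carrying the Langevin noise:

```latex
i\hbar \frac{\partial \psi}{\partial t}
= \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + V_{r}(t)
+ \frac{\gamma \hbar}{2i}\left( \ln\frac{\psi}{\psi^{*}}
- \left\langle \ln\frac{\psi}{\psi^{*}} \right\rangle \right) \right] \psi
```

Whether the paper's derivation takes exactly this form is not stated in the abstract.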
DEAP-3600 Data Acquisition System
NASA Astrophysics Data System (ADS)
Lindner, Thomas
2015-12-01
DEAP-3600 is a dark matter experiment using liquid argon to detect Weakly Interacting Massive Particles (WIMPs). The DEAP-3600 Data Acquisition (DAQ) has been built using a combination of commercial and custom electronics, organized using the MIDAS framework. The DAQ system needs to suppress a high rate of background events from 39Ar beta decays. This suppression is implemented using a combination of online firmware and software-based event filtering. We will report on progress commissioning the DAQ system, as well as the development of the web-based user interface.
Virtual screening for potential inhibitors of bacterial MurC and MurD ligases.
Tomašić, Tihomir; Kovač, Andreja; Klebe, Gerhard; Blanot, Didier; Gobec, Stanislav; Kikelj, Danijel; Mašič, Lucija Peterlin
2012-03-01
Mur ligases are bacterial enzymes involved in the cytoplasmic steps of peptidoglycan biosynthesis and are viable targets for antibacterial drug discovery. We have performed virtual screening for potential ATP-competitive inhibitors targeting MurC and MurD ligases, using a protocol of consecutive hierarchical filters. Selected compounds were evaluated for inhibition of MurC and MurD ligases, and weak inhibitors possessing dual inhibitory activity have been identified. These compounds represent new scaffolds for further optimisation towards multiple Mur ligase inhibitors with improved inhibitory potency.
Interactive Visualization of DGA Data Based on Multiple Views
NASA Astrophysics Data System (ADS)
Geng, Yujie; Lin, Ying; Ma, Yan; Guo, Zhihong; Gu, Chao; Wang, Mingtao
2017-01-01
The commissioning and operation of dissolved gas analysis (DGA) online monitoring makes up for the weaknesses of the traditional DGA method. However, the volume and high dimensionality of DGA data bring a huge challenge for monitoring and analysis. In this paper, we present a novel interactive visualization model of DGA data based on multiple views. This model imitates multi-angle analysis by combining parallel coordinates, a scatter plot matrix and a data table. By offering brushing, collaborative filtering and focus+context techniques, this model provides a convenient and flexible interactive way to analyze and understand DGA data.
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-07-19
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE.
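The abstract names but does not specify the MLAF and AMCF kernels, so the following is only one plausible reading, with all sizes illustrative: a median filter followed by a local average as the pre-filter, and grey-scale closing with two elongated structuring elements as the 'asymmetric' post-filter.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, grey_closing

def mlaf(img, med_size=3, avg_size=5):
    """Pre-filter: median (suppresses speckle/scatter outliers) followed
    by a local average (suppresses thermal noise)."""
    return uniform_filter(median_filter(img, size=med_size), size=avg_size)

def amcf(img, size_h=(1, 9), size_v=(9, 1)):
    """Post-filter: grey-scale closing with two differently oriented
    structuring elements, combined pointwise, so extended targets
    survive the BMVT stage."""
    return np.maximum(grey_closing(img, size=size_h),
                      grey_closing(img, size=size_v))
```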
An innovative approach to capability-based emergency operations planning.
Keim, Mark E
2013-01-01
This paper describes the innovative use of information technology for assisting disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to the capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method, for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology.
Distributed estimation for adaptive sensor selection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Hassan Hamid, Matasm M.
2014-05-01
Wireless sensor networks (WSNs) are usually deployed for monitoring systems, with distributed detection and estimation performed by the sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
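A minimal sketch of two ingredients described in the abstract, under assumptions of our own (synchronous updates, an unweighted adjacency matrix, a plain threshold rule): a consensus averaging step over neighbours, and innovation-based adaptive sensor selection.

```python
import numpy as np

def consensus_step(estimates, adjacency, eps=0.2):
    """One consensus iteration: each node moves its state estimate (a row
    of `estimates`) toward the mean of its neighbours' estimates."""
    x = estimates.copy()
    for i, row in enumerate(adjacency):
        nbrs = np.flatnonzero(row)
        if nbrs.size:
            x[i] += eps * (estimates[nbrs].mean(axis=0) - estimates[i])
    return x

def select_active(innovations, threshold):
    """Keep a sensor active when the norm of its local innovations vector
    exceeds a threshold; an illustrative stand-in for the ASS rule."""
    return [i for i, v in enumerate(innovations)
            if np.linalg.norm(v) > threshold]
```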
Roh, Young Hak; Noh, Jung Ho; Gong, Hyun Sik; Baek, Goo Hyun
2017-12-01
Patients with low appendicular lean mass plus slow gait speed or weak grip strength are at risk for poor functional recovery after surgery for distal radius fracture, even when they have similar radiologic outcomes. Loss of skeletal muscle mass and the consequent loss in muscle function are associated with aging, and this condition negatively impacts the activities of daily living and increases elderly individuals' vulnerability to falls. Thus, patients with low appendicular lean mass would be expected to show different functional recovery after surgery for distal radius fracture (DRF) compared to those without this condition. This study compares the functional outcomes after surgery for DRF in patients with or without low appendicular lean mass plus slowness or weakness. A total of 157 patients older than 50 years of age with a DRF treated via volar plate fixation were enrolled in this prospective study. The definition of low appendicular lean mass with slowness or weakness was based on the consensus of the Asian Working Group for Sarcopenia. The researchers compared functional assessments (wrist range of motion and the Michigan Hand Questionnaire [MHQ]) and radiographic assessments (radial inclination, volar tilt, ulnar variance, and articular congruity) 12 months after surgery between patients with and without low appendicular lean mass plus slowness or weakness. Multivariable regression analyses were performed to determine whether appendicular lean mass, grip strength, gait speed, patient demographics, or injury characteristics accounted for the functional outcomes. Patients with low appendicular lean mass plus slowness or weakness showed significantly lower recovery of MHQ score than those in the control group throughout the 12 months. There was no significant difference in range of motion between the groups. The radiologic outcomes showed no significant difference between groups in terms of volar tilt, radial inclination, or ulnar variance. According to the multivariable regression analysis, poor recovery of MHQ score was associated with increased age, weak grip strength, and lower appendicular lean mass, and these three factors accounted for 37% of the variation in MHQ scores. Patients with low appendicular lean mass plus slowness or weakness are at risk for poor functional recovery after surgery for DRF, even when they have similar radiologic outcomes.
Acoustic Emission Detected by Matched Filter Technique in Laboratory Earthquake Experiment
NASA Astrophysics Data System (ADS)
Wang, B.; Hou, J.; Xie, F.; Ren, Y.
2017-12-01
Acoustic emission (AE) in laboratory earthquake experiments is a fundamental means of studying the mechanics of earthquakes, for instance to characterize the aseismic, nucleation, and post-seismic phases in stick-slip experiments. Unlike field earthquakes, AEs are generally recorded only when they exceed a threshold, so some weak signals may be missed. Here we conducted an experiment on a 1.1 m × 1.1 m granite block with a 1.5 m fault; 13 receivers with the same sampling rate of 3 MHz were placed on the surface. We adopt continuous recording and a matched filter technique to detect low-SNR signals. Because there are many signals around each stick-slip event, picking P-arrivals manually would be time-consuming. We therefore combined the short-term-average to long-term-average ratio (STA/LTA) technique with the autoregressive Akaike information criterion (AR-AIC) technique to pick arrivals automatically, and found that the accuracy of most P-arrival picks is sufficient for locating the signals. We will then locate the signals and apply a matched filter technique to detect further low-SNR events in the laboratory earthquake experiment. Detailed and updated results will be presented at the meeting.
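A compact sketch of the two detection ingredients, with window lengths and thresholds illustrative for MHz-rate AE records rather than taken from the experiment:

```python
import numpy as np

def sta_lta(x, fs, sta=0.5e-3, lta=5e-3):
    """Short-term over long-term average energy ratio for onset picking
    (window lengths in seconds); window ends are only roughly aligned,
    which is adequate for a trigger."""
    ns, nl = int(sta * fs), int(lta * fs)
    c = np.concatenate(([0.0], np.cumsum(x.astype(float) ** 2)))
    sta_v = (c[ns:] - c[:-ns]) / ns
    lta_v = (c[nl:] - c[:-nl]) / nl
    n = min(len(sta_v), len(lta_v))
    return sta_v[-n:] / (lta_v[-n:] + 1e-12)

def matched_filter(trace, template):
    """Approximate normalized cross-correlation of a template event
    against a (globally demeaned) continuous trace; peaks well above the
    noise level flag repeating low-SNR events."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    d = trace - trace.mean()
    cc = np.correlate(d, t, mode="valid")
    win = np.sqrt(np.convolve(d**2, np.ones(len(t)), mode="valid")) + 1e-12
    return cc / (win * np.linalg.norm(t))
```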
Fan, Bingfei; Li, Qingguo; Liu, Tao
2017-12-28
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming increasingly accurate, lightweight, small and low-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbance affects the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of the existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelliccia, Daniele; Vaz, Raquel; Svalbe, Imants
X-ray imaging of soft tissues is made difficult by their low absorbance. The use of x-ray phase imaging and tomography can significantly enhance the detection of these tissues, and several approaches have been proposed to this end. Methods such as analyzer-based imaging or grating interferometry produce differential phase projections that can be used to reconstruct the 3D distribution of the sample refractive index. We report on the quantitative comparison of three different methods to obtain x-ray phase tomography with filtered back-projection from differential phase projections in the presence of noise. The three procedures represent different numerical approaches to solve the same mathematical problem, namely phase retrieval and filtered back-projection. It is found that obtaining individual phase projections and subsequently applying a conventional filtered back-projection algorithm produces the best results for noisy experimental data, when compared with other procedures based on the Hilbert transform. The algorithms are tested on simulated phantom data with added noise and the predictions are confirmed by experimental data acquired using a grating interferometer. The experiment is performed on unstained adult zebrafish, an important model organism for biomedical studies. The method optimization described here allows resolution of weak soft tissue features, such as muscle fibers.
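A sketch of the procedure the paper found best, as we read it: integrate each differential phase projection along the detector axis, then apply conventional filtered back-projection. The integration-constant fix and the use of scikit-image's iradon are our assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from skimage.transform import iradon

def phase_then_fbp(dphi, angles_deg, dx=1.0):
    """dphi: (detector_pixels, n_angles) array of differential phase
    projections. Integrate along the detector axis to recover phase
    projections, then reconstruct with a standard ramp-filtered FBP."""
    phi = cumulative_trapezoid(dphi, dx=dx, axis=0, initial=0.0)
    phi -= phi.mean(axis=0, keepdims=True)   # crude integration-constant fix
    return iradon(phi, theta=angles_deg, filter_name="ramp", circle=True)
```

The Hilbert-transform alternatives fold the integration into the back-projection filter instead; the paper's comparison concerns how these choices behave under noise.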
NASA Astrophysics Data System (ADS)
Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani
2015-05-01
For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) degrade the performance of the wireless system. In this paper, a rake receiver is presented for signals spread with the direct-sequence spread-spectrum (DS-SS) technique. An adaptive rake-receiver structure is shown in which the receiver tap weights are adjusted using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms to support weak signals by noise cancellation and to mitigate the interferences. To reduce the computational complexity of these algorithms, the well-known approach of partial-update (PU) adaptive filtering was employed in the proposed system, with variants such as sequential-partial, periodic-partial, M-max-partial, and selective-partial updates (SPU). Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) show that the partial-update algorithms have nearly comparable performance to the full-update adaptive filters. Furthermore, the SPU variant performs close to the full-update NLMS and APA, while the M-max variant performs close to the full-update LMS algorithm.
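A minimal sketch of one of the named variants, the M-max partial-update NLMS, with illustrative parameters: at each step only the m taps aligned with the largest-magnitude regressor entries are updated, trading a little convergence speed for lower complexity.

```python
import numpy as np

def mmax_nlms(d, x, taps=16, mu=0.5, m=4, eps=1e-8):
    """Partial-update NLMS: d is the desired signal, x the input; only the
    m taps with the largest |input| entries are updated each iteration."""
    w = np.zeros(taps)
    y = np.zeros(len(d))
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]               # regressor, most recent first
        y[n] = w @ u
        e = d[n] - y[n]
        idx = np.argsort(np.abs(u))[-m:]      # M-max tap selection
        w[idx] += mu * e * u[idx] / (u @ u + eps)
    return y, w
```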
On event-based optical flow detection
Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko
2015-01-01
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high-dynamic-range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods over plane-fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.
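A small sketch of the plane-fitting idea the paper finds well suited to event clouds, with neighbourhood sizes illustrative: fit t = a*x + b*y + c to nearby events and read the normal flow from the plane's spatial gradient, v = (a, b) / (a² + b²).

```python
import numpy as np

def local_plane_flow(events, center, radius=3.0, dt_win=20e-3):
    """events: (N, 3) array of (x, y, t). Fit a plane to the events near
    `center` within a recent time window and return the normal flow
    vector, or None if the fit is degenerate."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    m = ((np.hypot(x - center[0], y - center[1]) <= radius)
         & (t.max() - t <= dt_win))
    if m.sum() < 4:
        return None                           # too few events to fit
    A = np.column_stack([x[m], y[m], np.ones(m.sum())])
    (a, b, _), *_ = np.linalg.lstsq(A, t[m], rcond=None)
    g2 = a * a + b * b
    return None if g2 < 1e-12 else np.array([a, b]) / g2
```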
Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.
Tan, Yin-Yin Sarah; Mueller, Shane T
2016-09-01
Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
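A minimal sketch of the mixture step, assuming log-transformed response times and BIC-based selection of the number of components (the paper's strong-consensus adaptation adds structure beyond this):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_rt_mixture(log_rts, max_k=3):
    """Fit 1..max_k component Gaussian mixtures to log response times and
    keep the best by BIC; more than one recovered component suggests
    distinct shared-belief groups despite identical overt answers."""
    X = np.asarray(log_rts).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
            for k in range(1, max_k + 1)]
    best = min(fits, key=lambda g: g.bic(X))
    return best, best.predict(X)
```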
Jiménez-Herranz, Borja; Manrique-Arribas, Juan C; López-Pastor, Víctor M; García-Bengoechea, Enrique
2016-10-01
This research applies a communicative methodology (CM) to the transformation and improvement of the Municipal Comprehensive School Sports Programme in Segovia, Spain (MCSSP), using egalitarian dialogue, based on validity rather than power claims to achieve intersubjectivity and arrive at consensus between all of the Programme's stakeholders through the intervention of an advisory committee (AC). The AC is a body comprising representatives of all stakeholder groups involved in the programme. During the 2013-2014 academic year the programme's AC met four times, operating as a communicative focus group (CFG). The meetings focused on: (1) excluding dimensions (barriers preventing transformation) and transforming dimensions (ways of overcoming barriers), (2) the programme's strengths, (3) the programme's weaknesses and specific actions to remedy them, and (4) the resulting conclusions which were then incorporated into the subsequent programme contract signed between the University and the Segovia Local Authority for 2014-2018. The key conclusions were: (1) the recommendations of the AC widen the range of perspectives and help the research team to make key decisions and (2) the use of CM to fully evaluate the programme and to reach a consensus on how to improve it proved very valuable. Copyright © 2016 Elsevier Ltd. All rights reserved.
Emergence of a rehabilitation medicine model for low vision service delivery, policy, and funding.
Stelmack, Joan
2005-05-01
A rehabilitation medicine model for low vision rehabilitation is emerging. There have been many challenges to reaching consensus on the roles of each discipline (optometry, ophthalmology, occupational therapy, and vision rehabilitation professionals) in the service delivery model and finding a place in the reimbursement system for all the providers. The history of low vision, legislation associated with Centers for Medicare and Medicaid Services coverage for vision rehabilitation, and research on the effectiveness of low vision service delivery are reviewed. Vision rehabilitation is now covered by Medicare under Physical Medicine and Rehabilitation codes by some Medicare carriers, yet reimbursement is not available for low vision devices or refraction. Also, the role of vision rehabilitation professionals (rehabilitation teachers, orientation and mobility specialists, and low vision therapists) in the model needs to be determined. In a recent systematic review of the scientific literature on the effectiveness of low vision services contracted by the Agency for Health Care Quality Research, no clinical trials were found. The literature consists primarily of longitudinal case studies, which provide weak support for third-party funding for vision rehabilitative services. Providers need to reach consensus on medical necessity, treatment plans, and protocols. Research on low vision outcomes is needed to develop an evidence base to guide clinical practice, policy, and funding decisions.
Welch, Vivian A; Akl, Elie A; Pottie, Kevin; Ansari, Mohammed T; Briel, Matthias; Christensen, Robin; Dans, Antonio; Dans, Leonila; Eslava-Schmalbach, Javier; Guyatt, Gordon; Hultcrantz, Monica; Jull, Janet; Katikireddi, Srinivasa Vittal; Lang, Eddy; Matovinovic, Elizabeth; Meerpohl, Joerg J; Morton, Rachael L; Mosdol, Annhild; Murad, M Hassan; Petkovic, Jennifer; Schünemann, Holger; Sharaf, Ravi; Shea, Bev; Singh, Jasvinder A; Solà, Ivan; Stanev, Roger; Stein, Airton; Thabaneii, Lehana; Tonia, Thomy; Tristan, Mario; Vitols, Sigurd; Watine, Joseph; Tugwell, Peter
2017-10-01
The aim of this paper is to describe a conceptual framework for how to consider health equity in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) guideline development process. Consensus-based guidance was developed by GRADE working group members and other methodologists. We developed consensus-based guidance to help address health equity when rating the certainty of synthesized evidence (i.e., quality of evidence). When health inequity is determined to be a concern by stakeholders, we propose five methods for explicitly assessing health equity: (1) include health equity as an outcome; (2) consider patient-important outcomes relevant to health equity; (3) assess differences in the relative effect size of the treatment; (4) assess differences in baseline risk and the differing impacts on absolute effects; and (5) assess indirectness of evidence to disadvantaged populations and/or settings. The most important priority for research on health inequity and guidelines is to identify and document examples where health equity has been considered explicitly in guidelines. Although there is a weak scientific evidence base for assessing health equity, this should not discourage the explicit consideration of how guidelines and recommendations affect the most vulnerable members of society. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Tomlinson, Mathew J; Naeem, Asad
2018-03-21
CASA has been used in reproductive medicine and pathology laboratories for over 25 years, yet the 'fertility industry' generally remains sceptical and has avoided automation, despite clear weaknesses in manual semen analysis. Early implementers had difficulty in validating CASA-Mot instruments against recommended manual methods (haemocytometer) due to the interference of seminal debris and non-sperm cells, which also affects the accuracy of grading motility. Both the inability to provide accurate sperm counts and a lack of consensus as to the value of sperm kinematic parameters appear to have continued to have a negative effect on CASA-Mot's reputation. One positive interpretation from earlier work is that at least one or more measures of sperm velocity adds clinical value to the semen analysis, and these are clearly more objective than any manual motility analysis. Moreover, recent CASA-Mot systems offer simple solutions to earlier problems in eliminating artefacts and have been successfully validated for sperm concentration; as a result, they should be viewed with more confidence in relation to motility grading. Sperm morphology and DNA testing both require an evidence-based consensus and a well-validated (reliable, reproducible) assay to be developed before automation of either can be of real clinical benefit.
A consensus definition of cataplexy in mouse models of narcolepsy.
Scammell, Thomas E; Willie, Jon T; Guilleminault, Christian; Siegel, Jerome M
2009-01-01
People with narcolepsy often have episodes of cataplexy, brief periods of muscle weakness triggered by strong emotions. Many researchers are now studying mouse models of narcolepsy, but definitions of cataplexy-like behavior in mice differ across labs. To establish a common language, the International Working Group on Rodent Models of Narcolepsy reviewed the literature on cataplexy in people with narcolepsy and in dog and mouse models of narcolepsy and then developed a consensus definition of murine cataplexy. The group concluded that murine cataplexy is an abrupt episode of nuchal atonia lasting at least 10 seconds. In addition, theta activity dominates the EEG during the episode, and video recordings document immobility. To distinguish a cataplexy episode from REM sleep after a brief awakening, at least 40 seconds of wakefulness must precede the episode. Bouts of cataplexy fitting this definition are common in mice with disrupted orexin/hypocretin signaling, but these events almost never occur in wild type mice. It remains unclear whether murine cataplexy is triggered by strong emotions or whether mice remain conscious during the episodes as in people with narcolepsy. This working definition provides helpful insights into murine cataplexy and should allow objective and accurate comparisons of cataplexy in future studies using mouse models of narcolepsy.
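The consensus criteria are concrete enough to encode directly; a sketch of a per-episode check (extraction of the inputs from EEG/EMG/video scoring is assumed to happen upstream):

```python
def is_murine_cataplexy(atonia_s, theta_dominant, video_immobile, wake_before_s):
    """Consensus definition: abrupt nuchal atonia lasting >= 10 s, with
    theta-dominant EEG and video-documented immobility, preceded by
    >= 40 s of wakefulness to exclude REM after a brief awakening."""
    return (atonia_s >= 10 and theta_dominant
            and video_immobile and wake_before_s >= 40)
```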
I'm Not a Warmist! Transcending Ideological Barriers in Climate Communication (Invited)
NASA Astrophysics Data System (ADS)
Denning, S.
2013-12-01
A wealth of social science research has shown that public perception of climate change is very strongly colored by ideological filters in which facts are evaluated based on their fit to previously held beliefs. Scientific discourse about climate change is well received by environmentalism but confirms the fears and competitive impulses of libertarianism. When data and belief come into conflict in public discourse, belief nearly always dominates. Scientists, educators, and science communicators must acknowledge the cultural context of climate change in order to lift climate discourse out of its ideological gutter. Many communication strategies emerging from solid social-science research fail to acknowledge the ideological cultural filters through which people experience climate discourse. Emphasizing recent trends, current weather events and impacts, and especially argument from the authority of expertise and consensus is effective with average audiences but triggers reflexive opposition from suspicious listeners. Beyond ideology, climate change is Simple, Serious, and Solvable. Effective communication of these three key ideas can succeed when the science argument is carefully framed to avoid attacking the audience's ethical identity.
Shack-Hartmann wavefront sensing based on binary-aberration-mode filtering.
Wang, Shuai; Yang, Ping; Xu, Bing; Dong, Lizhi; Ao, Mingwu
2015-02-23
Spot centroid detection has been required by Shack-Hartmann wavefront sensing since the technique was first proposed. For a Shack-Hartmann wavefront sensor, the standard structure places a camera behind a lenslet array to record the image of the spots. We propose a new Shack-Hartmann wavefront sensing technique that does not use spot centroid detection. Based on the principle of binary-aberration-mode filtering, for each subaperture only one light-detecting unit is used to measure the local wavefront slopes. It is thus possible to adopt single detectors in a Shack-Hartmann wavefront sensor, and the method is able to gain noise benefits from using single detectors behind each subaperture when sensing rapidly varying wavefronts in weak light. Moreover, due to non-discrete pixel imaging, this method is a potential solution for high measurement precision with fewer detecting units. Our simulations demonstrate the validity of the theoretical model. In addition, the results also indicate the advantage in measurement accuracy.
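For contrast, a sketch of the conventional centroid step that the proposed technique removes: per subaperture, the spot's centroid shift (converted to metres via the pixel pitch) divided by the lenslet focal length gives the local wavefront slope. Names and units are illustrative.

```python
import numpy as np

def subaperture_slopes(spot_img, ref_xy, f_lenslet, pixel_pitch):
    """Intensity-weighted centroid of one lenslet's spot image, returned
    as local wavefront slopes (radians) relative to the reference
    position ref_xy (pixels)."""
    y, x = np.mgrid[:spot_img.shape[0], :spot_img.shape[1]]
    w = spot_img.sum()
    cx, cy = (x * spot_img).sum() / w, (y * spot_img).sum() / w
    return ((cx - ref_xy[0]) * pixel_pitch / f_lenslet,
            (cy - ref_xy[1]) * pixel_pitch / f_lenslet)
```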
Zhang, Ying; Ye, Chengsong; Gong, Song; Wei, Gu; Yu, Xin; Feng, Lin
2013-04-01
A comprehensive study on the formation and characteristics of soluble microbial products (SMP) during drinking water biofiltration was carried out in four parallel pilot-scale ceramic biofilters with acetate as the substrate. Excellent treatment performance was achieved, while microbial biomass and acetate carbon both declined with the depth of the filter. The SMP concentration was determined by calculating the difference between the concentrations of dissolved organic carbon (DOC), biodegradable dissolved organic carbon (BDOC) and acetate carbon. The results revealed that SMP showed an obvious increase from 0 to 100 cm depth of the filter. A rising specific ultraviolet absorbance (SUVA) was also found, indicating that benzene or carbonyl groups might exist in these compounds. SMP produced during this drinking water biological process were shown to have weak mutagenicity and were not precursors of chlorination disinfection by-products. The volatile fraction of the SMP was semi-quantitatively analyzed; most of the compounds were dicarboxylic acids, and the others were hydrocarbons or benzene derivatives with 16-17 carbon atoms.
What it takes to invade grassland ecosystems: traits, introduction history and filtering processes
Carboni, Marta; Münkemüller, Tamara; Lavergne, Sébastien; Choler, Philippe; Borgy, Benjamin; Violle, Cyrille; Essl, Franz; Roquet, Cristina; Munoz, François; Consortium, DivGrass; Thuiller, Wilfried
2016-01-01
Whether the success of alien species can be explained by their functional or phylogenetic characteristics remains unresolved because of data limitations, scale issues and weak quantifications of success. Using permanent grasslands across France (50,000 vegetation plots, 2000 species, 130 aliens) and building on Rabinowitz's classification to quantify spread, we showed that phylogenetic and functional similarities to natives were the most important correlates of invasion success compared to intrinsic functional characteristics and introduction history. Results contrasted between spatial scales and components of invasion success. Widespread and common aliens were similar to co-occurring natives at coarse scales (indicating environmental filtering), but dissimilar at finer scales (indicating local competition). In contrast, regionally widespread but locally rare aliens showed patterns of competitive exclusion already at the coarse scale. Quantifying trait differences between aliens and natives and distinguishing the components of invasion success improved our ability to understand and potentially predict alien spread at multiple scales.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan
2016-01-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e., weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impacts are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pacini, Benjamin Robert; Mayes, Randall L.; Roettgen, Daniel R
2015-10-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e., weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impacts are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.
Application of adaptive filters in denoising magnetocardiogram signals
NASA Astrophysics Data System (ADS)
Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.
2017-05-01
Magnetocardiography (MCG) is the measurement of the weak magnetic fields of the heart using Superconducting QUantum Interference Devices (SQUIDs). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interference caused by sources inside the shielded room cannot be attenuated this way. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. Both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features. The calculated signal-to-noise ratio (SNR) of the denoised MCG traces is slightly higher for the LMS algorithm than for the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise at the power line frequency and its harmonics, which occur frequently in biomedical measurements.
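As a rough illustration of the adaptive noise cancellation idea, the sketch below implements a textbook LMS canceller in Python; the filter order, step size and the synthetic "cardiac" trace are assumptions for the example, not the authors' settings.

```python
import numpy as np

def lms_denoise(primary, reference, order=32, mu=0.01):
    """LMS adaptive noise canceller: subtracts the adaptively filtered
    reference (correlated with the noise) from the primary (signal + noise)."""
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most recent samples first
        y = w @ x                          # noise estimate
        e = primary[n] - y                 # error = denoised output
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out

# Toy example: 50 Hz interference on a synthetic cardiac-like trace.
fs = 1000
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) * np.exp(-((t % 0.83) / 0.05) ** 2)
noise = 0.5 * np.sin(2 * np.pi * 50 * t)
clean = lms_denoise(trace + noise, np.sin(2 * np.pi * 50 * t + 0.3))
```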
Magneto-ballistic transport in GaN nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoruvo, Giovanni, E-mail: giovanni.santoruvo@epfl.ch; Allain, Adrien; Ovchinnikov, Dmitry
2016-09-05
The ballistic filtering property of nanoscale crosses was used to investigate the effect of perpendicular magnetic fields on the ballistic transport of electrons on wide band-gap GaN heterostructures. The straight scattering-less trajectory of electrons was modified by a perpendicular magnetic field which produced a strong non-linear behavior in the measured output voltage of the ballistic filters and allowed the observation of semi-classical and quantum effects, such as quenching of the Hall resistance and manifestation of the last plateau, in excellent agreement with the theoretical predictions. A large measured phase coherence length of 190 nm allowed the observation of universal quantum fluctuations and weak localization of electrons due to quantum interference up to ∼25 K. This work also reveals the prospect of wide band-gap GaN semiconductors as a platform for basic transport and quantum studies, whose properties allow the investigation of ballistic transport and quantum phenomena at much larger voltages and temperatures than in other semiconductors.
Waveform Analysis Optimization for the 45Ca Beta Decay Experiment
NASA Astrophysics Data System (ADS)
Whitehead, Ryan; 45Ca Collaboration
2017-09-01
The 45Ca experiment is searching for a non-zero Fierz interference term, which would imply a tensor-type contribution to the low-energy weak interaction, possibly signaling Beyond-the-Standard-Model (BSM) physics. Beta spectrum measurements are being performed at LANL, using the segmented, large-area Si detectors developed for the Nab and UCNB experiments. 10⁹ events have been recorded, with 38 of the 254 pixels instrumented, during the summers of 2016 and 2017. An important step in extracting the energy spectra is correcting the waveforms for pile-up events. A set of analysis tools has been developed to address this issue. A trapezoidal filter has been characterized and optimized for the experimental waveforms. This filter is primarily used for energy extraction but, by adjusting certain parameters, it has been modified to identify pile-up events. The efficiency varies with the total energy of the particle and the amount deposited with each detector interaction. Preliminary results of this analysis will be presented.
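Trapezoidal filters for pulse shaping are commonly realized with a recursive algorithm in the style of Jordanov; the following is an illustrative sketch under that assumption, with the rise time, flat top and decay constant chosen arbitrarily for a toy pile-up waveform, not taken from the experiment.

```python
import numpy as np

def trapezoidal_filter(v, rise, flat, tau):
    """Recursive trapezoidal shaper (Jordanov-style) for exponential pulses.
    rise: rise/fall time in samples; flat: flat-top length in samples;
    tau: decay constant of the input pulse in samples."""
    k, l = rise, rise + flat
    M = 1.0 / (np.exp(1.0 / tau) - 1.0)      # pole-zero correction factor
    v = np.asarray(v, dtype=float)
    d = v.copy()
    d[k:] -= v[:-k]                          # d(n) = v(n) - v(n-k)
    dl = d.copy()
    dl[l:] -= d[:-l]                         # dl(n) = d(n) - d(n-l)
    p = np.cumsum(dl)                        # first accumulator
    return np.cumsum(p + M * dl)             # shaped output

# Toy exponential pulse train with pile-up: two overlapping decays.
n = np.arange(2000)
pulse = lambda t0, a: a * np.exp(-(n - t0) / 200.0) * (n >= t0)
wave = pulse(300, 1.0) + pulse(420, 0.6)
shaped = trapezoidal_filter(wave, rise=50, flat=20, tau=200)
```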
Continuous removal of ore floatation reagents by an anaerobic-aerobic biological filter.
Cheng, Huang; Lin, Hai; Huo, Hanxin; Dong, Yingbo; Xue, Qiuyu; Cao, Lixia
2012-06-01
A laboratory-scale up-flow anaerobic-aerobic biological filter was constructed to treat synthetic ore floatation wastewater. Volcanic stone was applied as the packing medium for the aerobic section. Biodegradation of some common ore floatation reagents, such as potassium ethyl xanthate, dithiophosphate and turpentine, was evaluated. An average COD reduction rate of 88.7% for potassium ethyl xanthate was obtained with the biofilter at an HRT of 6 h, an air-to-water flow ratio of 10:1 and pH 7. Its effluent COD concentration varied between 17 and 43 mg/L. Xanthates and dithiophosphate were found to be easily biodegradable, whereas turpentine was not favorable for microorganisms to digest. The performance of the reactor fluctuated only slightly within the temperature range of 10-35 °C. Operation of the biofilter was sensitive to influent pH values; a neutral to weakly basic influent was preferred for the biofilter to maintain efficient operation. Anaerobic treatment was able to enhance the biodegradability of influents significantly. Copyright © 2012 Elsevier Ltd. All rights reserved.
Sexual selection protects against extinction.
Lumley, Alyson J; Michalczyk, Łukasz; Kitson, James J N; Spurgin, Lewis G; Morrison, Catriona A; Godwin, Joanne L; Dickinson, Matthew E; Martin, Oliver Y; Emerson, Brent C; Chapman, Tracey; Gage, Matthew J G
2015-06-25
Reproduction through sex carries substantial costs, mainly because only half of sexual adults produce offspring. It has been theorized that these costs could be countered if sex allows sexual selection to clear the universal fitness constraint of mutation load. Under sexual selection, competition between (usually) males and mate choice by (usually) females create important intraspecific filters for reproductive success, so that only a subset of males gains paternity. If reproductive success under sexual selection is dependent on individual condition, which is contingent on mutation load, then sexually selected filtering through 'genic capture' could offset the costs of sex because it provides genetic benefits to populations. Here we test this theory experimentally by comparing whether populations with histories of strong versus weak sexual selection purge mutation load and resist extinction differently. After evolving replicate populations of the flour beetle Tribolium castaneum for 6 to 7 years under conditions that differed solely in the strength of sexual selection, we revealed mutation load using inbreeding. Lineages from populations that had previously experienced strong sexual selection were resilient to extinction and maintained fitness under inbreeding, with some families continuing to survive after 20 generations of sib × sib mating. By contrast, lineages derived from populations that experienced weak or non-existent sexual selection showed rapid fitness declines under inbreeding, and all were extinct after generation 10. Multiple mutations across the genome with individually small effects can be difficult to clear, yet sum to a significant fitness load; our findings reveal that sexual selection reduces this load, improving population viability in the face of genetic stress.
Three-dimensional seismic depth migration
NASA Astrophysics Data System (ADS)
Zhou, Hongbo
1998-12-01
One-pass 3-D modeling and migration for poststack seismic data may be implemented by replacing the traditional 45° one-way wave equation (a third-order partial differential equation) with a pair of second- and first-order partial differential equations. Except for an extra correction term, the resulting second-order equation has a form similar to Claerbout's 15° one-way wave equation, which is known to have a nearly circular horizontal impulse response. In this approach, there is no need to compensate for splitting errors. Numerical tests on synthetic data show that this algorithm has the desirable attributes of being second-order in accuracy and economical to solve. A modification of the Crank-Nicolson implementation maintains stability. Absorbing boundary conditions play an important role in one-way wave extrapolations by reducing reflections at grid edges. Clayton and Engquist's 2-D absorbing boundary conditions for one-way wave extrapolation by depth-stepping in the frequency domain are extended to 3-D using paraxial approximations of the scalar wave equation. Internal consistency is retained by incorporating the interior extrapolation equation with the absorbing boundary conditions. Numerical schemes are designed to make the proposed absorbing boundary conditions both mathematically correct and efficient, with negligible extra cost. Synthetic examples illustrate the effectiveness of the algorithm for extrapolation with the 3-D 45° one-way wave equation. Frequency-space domain Butterworth and Chebyshev dip filters are implemented. By regrouping the product terms in the filter transfer function into summations, a cascaded (serial) Butterworth dip filter can be made parallel. A parallel Chebyshev dip filter can be similarly obtained, and has the same form as the Butterworth filter but with different coefficients. One of the advantages of the Chebyshev filter is that it has a sharper transition zone than a Butterworth filter of the same order. Both filters are incorporated into 3-D one-way frequency-space depth migration for evanescent energy removal and for phase compensation of splitting errors; a single filter achieves both goals. Synthetic examples illustrate the behavior of the parallel filters. For a given order of filter, the cost of the Butterworth and Chebyshev filters is the same. A Chebyshev filter is more effective for phase compensation than a Butterworth filter of the same order, at the expense of some wavenumber-dependent amplitude ripples. An analytical formula for geometrical spreading is derived for a horizontally layered transversely isotropic medium with a vertical symmetry axis. Under this expression, geometrical spreading can be determined using only the anisotropic parameters in the first layer, the traveltime derivatives, and the source-receiver offset. An explicit, numerically feasible expression for geometrical spreading can be further obtained by considering some special cases of transverse isotropy, such as weak anisotropy or elliptic anisotropy. Therefore, with the techniques of non-hyperbolic moveout for transversely isotropic media, geometrical spreading can be calculated using picked traveltimes of primary P-wave reflections without having to know the actual parameters in the deeper subsurface; no ray tracing is needed.
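The claim that a Chebyshev filter has a sharper transition zone than a Butterworth filter of the same order is easy to check in one dimension; the sketch below compares the two magnitude responses with scipy. The order, ripple and cutoff are arbitrary illustrative choices, not the dip-filter parameters used in this work.

```python
import numpy as np
from scipy import signal

# Same order, same cutoff: Chebyshev I trades passband ripple
# for a sharper transition than Butterworth.
order, wc = 6, 0.3                       # cutoff normalized to Nyquist = 1
bb, ab = signal.butter(order, wc)
bc, ac = signal.cheby1(order, 1, wc)     # 1 dB passband ripple
w, hb = signal.freqz(bb, ab)
_, hc = signal.freqz(bc, ac)
idx = np.searchsorted(w / np.pi, 1.2 * wc)   # just past the cutoff
print("Butterworth |H| at 1.2*wc:", abs(hb[idx]))
print("Chebyshev   |H| at 1.2*wc:", abs(hc[idx]))  # noticeably smaller
```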
Mokkink, Lidwine B; Prinsen, Cecilia A C; Bouter, Lex M; Vet, Henrica C W de; Terwee, Caroline B
2016-01-19
COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) is an initiative of an international multidisciplinary team of researchers who aim to improve the selection of outcome measurement instruments, both in research and in clinical practice, by developing tools for selecting the most appropriate available instrument. In this paper these tools are described, i.e. the COSMIN taxonomy and definition of measurement properties; the COSMIN checklist to evaluate the methodological quality of studies on measurement properties; a search filter for finding studies on measurement properties; a protocol for systematic reviews of outcome measurement instruments; a database of systematic reviews of outcome measurement instruments; and a guideline for selecting outcome measurement instruments for Core Outcome Sets in clinical trials. Currently, we are updating the COSMIN checklist, particularly the standards for content validity studies. New standards for studies using Item Response Theory methods will also be developed. Additionally, in the future we want to develop standards for studies on the quality of non-patient-reported outcome measures, such as clinician-reported outcomes and performance-based outcomes. In summary, we plead for more standardization in the use of outcome measurement instruments, for conducting high-quality systematic reviews on measurement instruments in which the best available outcome measurement instrument is recommended, and for stopping the use of poor outcome measurement instruments.
Mokkink, Lidwine B.; Prinsen, Cecilia A. C.; Bouter, Lex M.; de Vet, Henrica C. W.; Terwee, Caroline B.
2016-01-01
Background: COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) is an initiative of an international multidisciplinary team of researchers who aim to improve the selection of outcome measurement instruments, both in research and in clinical practice, by developing tools for selecting the most appropriate available instrument. Method: In this paper these tools are described, i.e. the COSMIN taxonomy and definition of measurement properties; the COSMIN checklist to evaluate the methodological quality of studies on measurement properties; a search filter for finding studies on measurement properties; a protocol for systematic reviews of outcome measurement instruments; a database of systematic reviews of outcome measurement instruments; and a guideline for selecting outcome measurement instruments for Core Outcome Sets in clinical trials. Currently, we are updating the COSMIN checklist, particularly the standards for content validity studies. New standards for studies using Item Response Theory methods will also be developed. Additionally, in the future we want to develop standards for studies on the quality of non-patient-reported outcome measures, such as clinician-reported outcomes and performance-based outcomes. Conclusions: In summary, we plead for more standardization in the use of outcome measurement instruments, for conducting high-quality systematic reviews on measurement instruments in which the best available outcome measurement instrument is recommended, and for stopping the use of poor outcome measurement instruments. PMID:26786084
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
Faithful conditional quantum state transfer between weakly coupled qubits
NASA Astrophysics Data System (ADS)
Miková, M.; Straka, I.; Mičuda, M.; Krčmarský, V.; Dušek, M.; Ježek, M.; Fiurášek, J.; Filip, R.
2016-08-01
One of the strengths of quantum information theory is that it can treat quantum states without referring to their particular physical representation. In principle, quantum states can be therefore fully swapped between various quantum systems by their mutual interaction and this quantum state transfer is crucial for many quantum communication and information processing tasks. In practice, however, the achievable interaction time and strength are often limited by decoherence. Here we propose and experimentally demonstrate a procedure for faithful quantum state transfer between two weakly interacting qubits. Our scheme enables a probabilistic yet perfect unidirectional transfer of an arbitrary unknown state of a source qubit onto a target qubit prepared initially in a known state. The transfer is achieved by a combination of a suitable measurement of the source qubit and quantum filtering on the target qubit depending on the outcome of measurement on the source qubit. We experimentally verify feasibility and robustness of the transfer using a linear optical setup with qubits encoded into polarization states of single photons.
NASA Astrophysics Data System (ADS)
Wu, Shan; Burlingame, Quinn; Lin, Minren; Zhang, Qiming
2013-03-01
There is an increasing demand for dielectric materials with high electric energy density and low loss for a broad range of applications in modern electronics and electrical power systems, such as hybrid electric vehicles (HEV), medical defibrillators, filters, and switched-mode power supplies. One major challenge in developing dielectric polymers is how to achieve a high energy density U_e while maintaining low dielectric loss, even at very high applied electric fields. Here we show that amorphous polar polymers with very low impurity concentrations are promising candidates for such a dielectric. A polar polymer with high dipole moment and weak dipole coupling can provide a relatively high dielectric constant for high U_e, while the weak dipolar coupling and strong polar scattering of charge carriers suppress polarization and conduction losses. Indeed, an aromatic polythiourea thin film can maintain low loss up to high fields (>1 GV/m) with a high U_e (~24 J/cm³), which is very attractive for energy storage capacitors.
NASA Astrophysics Data System (ADS)
Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe
2016-11-01
For monitoring glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approach is validated with both simulated SAR data and real SAR data from the two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach over standard correlation-like methods. By using the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image texture. The presented phase correlation algorithm achieves an accuracy of better than 0.25 pixels in matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis of large-scale glacier surface dynamics.
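The core phase correlation step can be illustrated in a few lines of Python; this is a generic FFT-based integer-shift estimator on a synthetic texture, not the SVD/RANSAC-robustified, sub-pixel version described above.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel shift of b relative to a via the
    normalized cross-power spectrum (phase correlation)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                # keep phase only
    corr = np.real(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Toy test: shift a random texture by (7, -3) pixels and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, (7, -3), axis=(0, 1))
print(phase_correlation(shifted, img))    # -> (7, -3)
```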
San Segundo, Eugenia; Tsanas, Athanasios; Gómez-Vilda, Pedro
2017-01-01
There is a growing consensus that hybrid approaches are necessary for successful speaker characterization in Forensic Speaker Comparison (FSC); hence this study explores the forensic potential of voice features combining source and filter characteristics. The former relate to the action of the vocal folds, while the latter reflect the geometry of the speaker's vocal tract. This set of features has been extracted from pause fillers, which are long enough for robust feature estimation yet spontaneous enough to be extracted from voice samples in real forensic casework. Speaker similarity was measured using standardized Euclidean distances (ED) between pairs of speakers: 54 different-speaker (DS) comparisons, 54 same-speaker (SS) comparisons and 12 comparisons between monozygotic twins (MZ). Results revealed that the differences between DS and SS comparisons were significant in both high-quality and telephone-filtered recordings, with no false rejections and limited false acceptances; this finding suggests that this set of voice features is highly speaker-dependent and therefore forensically useful. The mean ED for MZ pairs lies between the average ED for SS comparisons and DS comparisons, as expected from the literature on twin voices. Specific cases of MZ speakers with very high ED (i.e., strong dissimilarity) are discussed in the context of sociophonetic and twin studies. A preliminary simplification of the Vocal Profile Analysis (VPA) Scheme is proposed, which enables the quantification of voice quality features in the perceptual assessment of speaker similarity and allows for the calculation of perceptual-acoustic correlations. The adequacy of z-score normalization for this study is also discussed, as well as the relevance of heat maps for detecting the so-called phantoms in recent approaches to the biometric menagerie. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
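For reference, the standardized Euclidean distance used to compare speaker pairs is available directly in scipy; the feature vectors and per-feature variances below are hypothetical stand-ins for the source-filter features described above, chosen only to show the call.

```python
import numpy as np
from scipy.spatial.distance import seuclidean

# Hypothetical feature vectors for two voice samples (e.g. jitter,
# shimmer, two formant values); V holds the per-feature variances
# estimated over the whole speaker population.
u = np.array([1.2, 0.8, 510.0, 1480.0])
v = np.array([1.0, 0.9, 495.0, 1515.0])
V = np.array([0.04, 0.01, 900.0, 2500.0])
print(seuclidean(u, v, V))   # = sqrt(sum((u - v)**2 / V))
```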
Beaton, Dorcas E; Dyer, Sarah; Boonen, Annelies; Verstappen, Suzanne M M; Escorpizo, Reuben; Lacaille, Diane V; Bosworth, Ailsa; Gignac, Monique A M; Leong, Amye; Purcaru, Oana; Leggett, Sarah; Hofstetter, Cathy; Peterson, Ingemar F; Tang, Kenneth; Fautrel, Bruno; Bombardier, Claire; Tugwell, Peter S
2016-01-01
Indicators of work role functioning (being at work, and being productive while at work) are important outcomes for persons with arthritis. As the worker productivity working group at OMERACT (Outcome Measures in Rheumatology), we sought to provide an evidence base for consensus on standardized instruments to measure worker productivity [both absenteeism and at-work productivity (presenteeism), as well as critical contextual factors]. Literature reviews and primary studies were done and reported to the OMERACT 12 (2014) meeting to build the OMERACT Filter 2.0 evidence for worker productivity outcome measurement instruments. Contextual factor domains that could have an effect on scores on worker productivity instruments were identified by nominal group techniques, and their strength of influence was further assessed by literature review. At OMERACT 9 (2008), we identified 6 candidate measures of absenteeism, which received 94% endorsement at the plenary vote. At OMERACT 11 (2012) we received over the required minimum vote of 70% for endorsement of 2 at-work productivity loss measures. During OMERACT 12 (2014), out of 4 measures of at-work productivity loss, 3 (1 global; 2 multi-item) received support as having passed the OMERACT Filter with over 70% of the plenary vote. In addition, 3 contextual factor domains received a 95% vote to explore their validity as core contextual factors: nature of work, work accommodation, and workplace support. Our current recommendations for at-work productivity loss measures are: WALS (Workplace Activity Limitations Scale), WLQ PDmod (Work Limitations Questionnaire with modified physical demands scale), WAI (Work Ability Index), WPS (Arthritis-specific Work Productivity Survey), and WPAI (Work Productivity and Activity Impairment Questionnaire). Our future research focus will shift to confirming core contextual factors to consider in the measurement of worker productivity.
Raman spectra of adsorbed layers on space shuttle and AOTV thermal protection system surface
NASA Technical Reports Server (NTRS)
Willey, Ronald J.
1987-01-01
Surfaces of interest for space vehicle heat shield design were struck by a 2 W argon ion laser line while subjected to supersonic arc jet flow conditions. Emission spectra were taken at 90 deg to the angle of laser incidence on the test object. Results showed possible weak Raman shifts which could not be directly tied to any particular parameter such as surface temperature or gas composition. The investigation must be considered exploratory in terms of findings. Many undesirable effects were found and corrected as the project progressed. For instance, initial spectrometer settings led to ghosts, which were eliminated by closing the intermediate filter slit of the Spex from 8 to 3 mm. Further, under certain conditions, plasma lines from the laser were observed. Several materials were also investigated at room temperature for Raman shifts. Results showed Raman shifts for RCC and TEOS-coated materials. The HRSI materials showed only weak Raman shifts, although substantial effort was devoted to studying these materials. Baseline materials showed the technique to be sound. The original goal was to find a Raman shift for the High-temperature Reusable Surface Insulation (HRSI) Reaction Cured borosilicate Glass (RCG) coated material and tie the amplitude of this peak to arc jet conditions. Weak Raman shifts may be present; however, time limitations prevented confirmation.
Yoshida, Kenta; Shimodaira, Masaki; Toyama, Takeshi; Shimizu, Yasuo; Inoue, Koji; Yoshiie, Toshimasa; Milan, Konstantinovic J; Gerard, Robert; Nagai, Yasuyoshi
2017-04-01
To evaluate dislocations induced by neutron irradiation, we developed a weak-beam scanning transmission electron microscopy (WB-STEM) system by installing a novel beam selector, an annular detector, a high-speed CCD camera and an imaging filter in the camera chamber of a spherical aberration-corrected transmission electron microscope. The capabilities of the WB-STEM with respect to wide-view imaging, real-time diffraction monitoring and multi-contrast imaging are demonstrated using typical reactor pressure vessel steel that had been used in a European nuclear reactor for 30 years as a surveillance test piece, with a fluence of 1.09 × 10²⁰ neutrons cm⁻². The quantitatively measured size distribution (average loop size = 3.6 ± 2.1 nm), number density of the dislocation loops (3.6 × 10²² m⁻³) and dislocation density (7.8 × 10¹³ m m⁻³) were carefully compared with the values obtained via conventional weak-beam transmission electron microscopy studies. In addition, cluster analysis using atom probe tomography (APT) further demonstrated the potential of the WB-STEM for correlative electron tomography/APT experiments. © The Author 2017. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been investigated for acquisition and tracking in global navigation satellite system (GNSS) receivers and shows high performance. However, current methods operate directly on the sampled data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampled data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimum of the cost function is searched iteratively with an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
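As a simple illustration of the smoothing stage, a minimal two-state (phase, Doppler) Kalman filter over discriminator outputs might look as follows; the state model, noise levels and update rate are assumptions for the sketch, not the adaptive design of the paper.

```python
import numpy as np

def carrier_kf(phase_meas, T, q=1.0, r=0.05):
    """Minimal 2-state Kalman filter smoothing noisy phase measurements
    from a discriminator; state = [phase (rad), Doppler (Hz)]."""
    F = np.array([[1.0, 2 * np.pi * T], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])                         # measure phase only
    Q = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
    R = np.array([[r]])
    x, P, out = np.zeros(2), np.eye(2), []
    for z in phase_meas:
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                            # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Toy run: constant 10 Hz Doppler observed through noisy phase samples.
T, n = 0.02, 200
true_phase = 2 * np.pi * 10 * np.arange(n) * T
est = carrier_kf(true_phase + 0.1 * np.random.randn(n), T)
```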
Calibration of colour gradient bias in shear measurement using HST/CANDELS data
NASA Astrophysics Data System (ADS)
Er, X.; Hoekstra, H.; Schrabback, T.; Cardone, V. F.; Scaramella, R.; Maoli, R.; Vicinanza, M.; Gillis, B.; Rhodes, J.
2018-06-01
Accurate shape measurements are essential to infer cosmological parameters from large-area weak gravitational lensing studies. The compact diffraction-limited point spread function (PSF) in space-based observations is greatly beneficial, but its chromaticity for a broad-band observation can lead to subtle new effects that hitherto could be ignored: the PSF of a galaxy is no longer uniquely defined, and spatial variations in the colours of galaxies result in biases in the inferred lensing signal. Taking Euclid as a reference, we show that this colour gradient bias (CG bias) can be quantified with high accuracy using available multicolour Hubble Space Telescope (HST) data. In particular we study how noise in the HST observations might impact such measurements and find this to be negligible. We determine the CG bias using HST observations in the F606W and F814W filters and observe a correlation with colour, in line with expectations, whereas the dependence on redshift is weak. The biases for individual galaxies are generally well below 1 per cent, which may be reduced further using morphological information from the Euclid data. Our results demonstrate that CG bias should not be ignored, but it is possible to determine its amplitude with sufficient precision, so that it will not significantly bias the weak lensing measurements using Euclid data.
Oatts, Thomas J; Hicks, Cheryl E; Adams, Amy R; Brisson, Michael J; Youmans-McDonald, Linda D; Hoover, Mark D; Ashley, Kevin
2012-02-01
Occupational sampling and analysis for multiple elements is generally approached using various approved methods from authoritative government sources such as the National Institute for Occupational Safety and Health (NIOSH), the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), as well as consensus standards bodies such as ASTM International. The constituents of a sample can exist as unidentified compounds requiring sample preparation to be chosen appropriately, as in the case of beryllium in the form of beryllium oxide (BeO). An interlaboratory study was performed to collect analytical data from volunteer laboratories to examine the effectiveness of methods currently in use for preparation and analysis of samples containing calcined BeO powder. NIST SRM® 1877 high-fired BeO powder (1100 to 1200 °C calcining temperature; count median primary particle diameter 0.12 μm) was used to spike air filter media as a representative form of beryllium particulate matter present in workplace sampling that is known to be resistant to dissolution. The BeO powder standard reference material was gravimetrically prepared in a suspension and deposited onto 37 mm mixed cellulose ester air filters at five different levels between 0.5 μg and 25 μg of Be (as BeO). Sample sets consisting of five BeO-spiked filters (in duplicate) and two blank filters, for a total of twelve unique air filter samples per set, were submitted as blind samples to each of 27 participating laboratories. Participants were instructed to follow their current process for sample preparation and utilize their normal analytical methods for processing samples containing substances of this nature. Laboratories using more than one sample preparation and analysis method were provided with more than one sample set. Results from 34 data sets ultimately received from the 27 volunteer laboratories were subjected to applicable statistical analyses. The observed performance data show that sample preparations using nitric acid alone, or combinations of nitric and hydrochloric acids, are not effective for complete extraction of Be from the SRM 1877 refractory BeO particulate matter spiked on air filters; but that effective recovery can be achieved by using sample preparation procedures utilizing either sulfuric or hydrofluoric acid, or by using methodologies involving ammonium bifluoride with heating. Laboratories responsible for quantitative determination of Be in workplace samples that may contain high-fired BeO should use quality assurance schemes that include BeO-spiked sampling media, rather than solely media spiked with soluble Be compounds, and should ensure that methods capable of quantitative digestion of Be from the actual material present are used.
Scintillation index of higher order mode laser beams in strong turbulence
NASA Astrophysics Data System (ADS)
Baykal, Yahya
2017-03-01
The scintillation index of higher-order laser modes is examined in strong atmospheric turbulence. In our formulation, modified Rytov theory is employed, with the inclusion of an existing modified turbulence spectrum which presents the atmospheric turbulence spectrum as a linear filter having refractive and diffractive spatial frequency cutoffs. Variations of the scintillation index in strong atmospheric turbulence are shown against the weak-turbulence plane-wave scintillation index for various higher-order laser modes of different sizes. Use of higher-order modes in optical wireless communication links operating in a strongly turbulent atmosphere is found to be advantageous in reducing the scintillation noise.
Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology
NASA Technical Reports Server (NTRS)
Larson, Stephen M.; Slaughter, Charles D.
1992-01-01
Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
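One simple flavor of radial-gradient enhancement, loosely in the spirit of the radial filters discussed above (not necessarily the exact algorithm evaluated here), subtracts the azimuthally averaged brightness profile so that faint coma structure stands out against the steep radial falloff; everything in the toy image below is synthetic.

```python
import numpy as np

def radial_profile_subtract(img, center):
    """Subtract the azimuthally averaged radial brightness profile."""
    ys, xs = np.indices(img.shape)
    r = np.hypot(xs - center[0], ys - center[1]).astype(int)
    counts = np.bincount(r.ravel())
    profile = np.bincount(r.ravel(), weights=img.ravel()) / counts
    return img - profile[r]

# Toy coma: steep radial falloff plus a faint jet along one azimuth.
y, x = np.mgrid[0:256, 0:256]
r = np.hypot(x - 128, y - 128) + 1
coma = 1000.0 / r
jet = (30.0 / r) * (np.abs(np.arctan2(y - 128, x - 128) - 0.5) < 0.1)
enhanced = radial_profile_subtract(coma + jet, (128, 128))
```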
Notes from the Field: Acute Mercury Poisoning After Home Gold and Silver Smelting--Iowa, 2014.
Koirala, Samir; Leinenkugel, Kathy
2015-12-18
In March 2014, a man, aged 59 years, who lived alone and had been using different smelting techniques viewed on the Internet to recover gold and silver from computer components, was evaluated at a local emergency department for shortness of breath, tremors, anorexia, and generalized weakness. During the smelting processes, he had used hydrogen peroxide, nitric acid, muriatic acid, and sulfuric acid purchased from local stores or Internet retailers. For protection, he wore a military gas mask of unknown type. The mask was used with filter cartridges, but their effectiveness against chemical fumes was not known.
Core outcome domains for clinical trials in non-specific low back pain.
Chiarotto, Alessandro; Deyo, Richard A; Terwee, Caroline B; Boers, Maarten; Buchbinder, Rachelle; Corbin, Terry P; Costa, Leonardo O P; Foster, Nadine E; Grotle, Margreth; Koes, Bart W; Kovacs, Francisco M; Lin, Chung-Wei Christine; Maher, Chris G; Pearson, Adam M; Peul, Wilco C; Schoene, Mark L; Turk, Dennis C; van Tulder, Maurits W; Ostelo, Raymond W
2015-06-01
Inconsistent reporting of outcomes in clinical trials of patients with non-specific low back pain (NSLBP) hinders comparison of findings and the reliability of systematic reviews. A core outcome set (COS) can address this issue, as it defines a minimum set of outcomes that should be reported in all clinical trials. In 1998, Deyo et al. recommended a standardized set of outcomes for LBP clinical research. The aim of this study was to update these recommendations by determining which outcome domains should be included in a COS for clinical trials in NSLBP. An International Steering Committee established the methodology to develop this COS. The OMERACT Filter 2.0 framework was used to draw up a list of potential core domains that were presented in a Delphi study. Researchers, care providers and patients were invited to participate in three Delphi rounds and were asked to judge which domains were core. A priori criteria for consensus were established before each round and were analysed together with arguments provided by panellists on the importance, overlap, aggregation and/or addition of potential core domains. The Steering Committee discussed the final results and made final decisions. A set of 280 experts was invited to participate in the Delphi; response rates in the three rounds were 52, 50 and 45%. Of 41 potential core domains presented in the first round, 13 had sufficient support to be presented for rating in the third round. Overall consensus was reached for the inclusion of three domains in this COS: 'physical functioning', 'pain intensity' and 'health-related quality of life'. Consensus on 'physical functioning' and 'pain intensity' was consistent across all stakeholders; 'health-related quality of life' was not supported by the patients, and all the other domains were not supported by two or more groups of stakeholders. Weighing all the arguments, the Steering Committee decided to include in the COS the three domains that reached overall consensus and the domain 'number of deaths'. The following outcome domains were included in this updated COS: 'physical functioning', 'pain intensity', 'health-related quality of life' and 'number of deaths'. The next step in the development of this COS will be to determine which measurement instruments best measure these domains.
NASA Astrophysics Data System (ADS)
Jagodzinski, Jeremy James
2007-12-01
The development to date of a diode-laser-based velocimeter providing point velocity measurements in unseeded flows using molecular Rayleigh scattering is discussed. The velocimeter is based on modulated filtered Rayleigh scattering (MFRS), a novel variation of filtered Rayleigh scattering (FRS) that utilizes modulated absorption spectroscopy techniques to detect a strong absorption of a relatively weak Rayleigh-scattered signal. A rubidium (Rb) vapor filter is used to provide the relatively strong absorption; alkali metal vapors have a high optical depth at modest vapor pressures, and their narrow linewidth is ideally suited for high-resolution velocimetry. Semiconductor diode lasers are used to generate the relatively weak Rayleigh-scattered signal; due to their compact, rugged construction, diode lasers are ideally suited for the environmental extremes encountered in many experiments. The MFRS technique utilizes the frequency-tuning capability of diode lasers to implement a homodyne detection scheme using lock-in amplifiers. The optical frequency of the diode-based laser system used to interrogate the flow is rapidly modulated about a reference frequency in the D2 line of Rb. The frequency modulation is imposed on the Rayleigh-scattered light that is collected from the probe volume in the flow under investigation. The collected frequency-modulated Rayleigh-scattered light is transmitted through a Rb vapor filter before being detected. The detected modulated absorption signal is fed to two lock-in amplifiers synchronized with the modulation frequency of the source laser. High levels of background rejection are attained since the lock-ins are both frequency and phase selective. The two lock-in amplifiers extract different Fourier components of the detected modulated absorption signal, which are ratioed to provide an intensity-normalized, frequency-dependent signal from a single detector. A Doppler frequency shift in the collected Rayleigh-scattered light due to a change in the velocity of the flow under investigation results in a change in the detected modulated absorption signal. This change in the detected signal provides a quantifiable measure of the Doppler frequency shift, and hence the velocity in the probe volume, provided that the laser source exhibits acceptable levels of frequency stability (determined by the magnitude of the velocities being measured). An extended cavity diode laser (ECDL) in the Littrow configuration provides frequency-tunable, relatively narrow-linewidth lasing for the MFRS velocimeter. Frequency stabilization of the ECDL is provided by a proportional-integral-differential (PID) controller based on an error signal in the reference arm of the experiment. The optical power of the Littrow laser source is amplified by an antireflection-coated (AR-coated) broad-stripe diode laser. The single-mode, frequency-modulatable, frequency-stable O(50 mW) of optical power provided by this extended cavity diode laser master oscillator power amplifier (ECDL-MOPA) system provided sufficient scattering signal from a condensing jet of CO2 to implement the MFRS technique in the frequency-locked mode of operation.
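The dual lock-in scheme amounts to multiplying the detected signal by reference sinusoids at harmonics of the modulation frequency and low-pass filtering; the sketch below is a software analogue under that assumption, with arbitrary toy signal parameters rather than the experiment's actual modulation settings.

```python
import numpy as np

def lock_in(sig, fs, f_mod, harmonic=1):
    """Software lock-in: extract the amplitude of one Fourier component
    of a detected signal at a harmonic of the modulation frequency."""
    t = np.arange(len(sig)) / fs
    i = sig * np.cos(2 * np.pi * harmonic * f_mod * t)
    q = sig * np.sin(2 * np.pi * harmonic * f_mod * t)
    # The mean acts as the low-pass stage; a hardware lock-in uses an RC filter.
    return 2 * np.hypot(i.mean(), q.mean())

# Toy detector signal with 1f and 2f components plus noise.
fs, f_mod = 100_000, 1_000
t = np.arange(0, 0.1, 1 / fs)
sig = 0.8 * np.cos(2 * np.pi * f_mod * t) + 0.3 * np.cos(4 * np.pi * f_mod * t + 0.4)
sig += 0.05 * np.random.randn(len(t))
# Ratio of two Fourier components: an intensity-normalized signal
# from a single detector, as in the MFRS scheme described above.
ratio = lock_in(sig, fs, f_mod, 2) / lock_in(sig, fs, f_mod, 1)
```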
The Principle of Energetic Consistency
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
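In symbols, with notation introduced here only for illustration (m for the conditional mean, P for the conditional covariance), the identity behind the principle reads:

```latex
% x = state, y = observations, m = E[x|y], P = E[(x-m)(x-m)^T | y]
\begin{align*}
  \mathbb{E}\left[\|x\|^{2} \mid y\right]
    &= \mathbb{E}\left[\|m + (x - m)\|^{2} \mid y\right] \\
    &= \|m\|^{2} + \operatorname{tr}(P),
\end{align*}
% since the cross term E[x - m | y] vanishes. If the dynamics conserve
% the total energy \|x\|^{2} in the natural energy variables, then
% \|m\|^{2} + tr(P) is constant between observations.
```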
Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2015-01-25
In this work, Raman hyperspectral images and multivariate curve resolution-alternating least squares (MCR-ALS) are used to study the distribution of actives and excipients within a pharmaceutical drug product. This article focuses mainly on the distribution of a low-dose constituent. Different approaches are compared, using initially filtered or non-filtered data, or using a column-wise augmented dataset with appended information on the low-dose component before starting the MCR-ALS iterative process. In the studied formulation, magnesium stearate is used as a lubricant to improve powder flowability. With a theoretical concentration of 0.5% (w/w) in the drug product, the spectral variance it contributes to the data is weak. When a principal component analysis (PCA) filtered dataset is used as the first step of the MCR-ALS approach, the lubricant information is lost in the unexplained variance and its distribution in the tablet cannot be highlighted. A sufficient number of components has to be used to generate the PCA noise-filtered matrix in order to keep the lubricant variability within the analyzed dataset; otherwise, the raw non-filtered data must be used. Different models are built using an increasing number of components in the PCA reduction. It is shown that the magnesium stearate information can be extracted from a PCA model using a minimum of 20 components. In the last part, a column-wise augmented matrix, including a reference spectrum of the lubricant, is used before starting the MCR-ALS process. PCA reduction is performed on the augmented matrix, so the magnesium stearate contribution is included within the MCR-ALS calculations. By using an appropriate PCA reduction with a sufficient number of components, or by using an augmented dataset including appended information on the low-dose component, the distributions of the two actives, the two main excipients and the low-dose lubricant are correctly recovered. Copyright © 2014 Elsevier B.V. All rights reserved.
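The PCA reduction step can be sketched as a truncated SVD reconstruction; the matrix sizes below are arbitrary stand-ins for an unfolded Raman image, and the component counts merely echo the 20-component finding above.

```python
import numpy as np

def pca_filter(D, n_components):
    """Rank-n reconstruction of the unfolded hyperspectral matrix D
    (pixels x wavenumbers) via truncated SVD; the discarded minor
    components are treated as noise."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U[:, :n_components] * s[:n_components] @ Vt[:n_components]

# With too few components, a 0.5% (w/w) constituent can vanish into the
# discarded variance; per the study, ~20 components are needed here.
rng = np.random.default_rng(1)
D = rng.random((500, 300))          # stand-in for an unfolded Raman image
D_too_aggressive = pca_filter(D, 5)
D_keeps_minor = pca_filter(D, 20)
```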
Li, Qingguo
2017-01-01
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their application in human movement analysis. However, challenges still exist in sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbance affects attitude and heading estimation for a magnetic and inertial sensor. First, we review four major components for dealing with magnetic disturbance, namely decoupling attitude estimation from magnetometer readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance and sensor fusion algorithms, and analyze the features of existing methods for each component. Second, to understand the role of each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented: a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based on the testing results, the strengths and weaknesses of the existing sensor fusion methods can be readily examined, and suggestions are presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods. PMID:29283432
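As a concrete, if simplified, example of the fusion families reviewed, a one-axis complementary filter trusts the gyro at high frequency and the accelerometer tilt at low frequency, which is one reason the attitude channel can be decoupled from the magnetometer entirely; the gains, noise levels and toy IMU trace below are all assumptions for illustration.

```python
import numpy as np

def complementary_filter(gyro, acc, dt, alpha=0.98):
    """Basic 1-axis complementary filter: the gyro integral dominates at
    high frequency, the accelerometer tilt at low frequency; the
    magnetometer (and hence magnetic disturbance) never enters the
    attitude estimate, only the heading channel would use it."""
    theta, out = 0.0, []
    for g, a in zip(gyro, acc):
        tilt = np.arctan2(a[0], a[2])            # pitch from gravity
        theta = alpha * (theta + g * dt) + (1 - alpha) * tilt
        out.append(theta)
    return np.array(out)

# Toy IMU: a slow oscillation seen by a biased gyro and a noisy accelerometer.
dt, n = 0.01, 1000
t = np.arange(n) * dt
true = 0.5 * np.sin(0.5 * t)
gyro = np.gradient(true, dt) + 0.05              # rate + constant bias
acc = np.stack([np.sin(true), np.zeros(n), np.cos(true)], axis=1)
est = complementary_filter(gyro, acc + 0.01 * np.random.randn(n, 3), dt)
```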
A Test of the Interstellar Boundary EXplorer Ribbon Formation in the Outer Heliosheath
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamayunov, Konstantin V.; Rassoul, Hamid; Heerikhuisen, Jacob, E-mail: kgamayunov@fit.edu
NASA's Interstellar Boundary EXplorer (IBEX) mission is imaging energetic neutral atoms (ENAs) propagating to Earth from the outer heliosphere and local interstellar medium (LISM). A dominant feature in all ENA maps is a ribbon of enhanced fluxes that was not predicted before IBEX. While more than a dozen models of the ribbon formation have been proposed, consensus has gathered around the so-called secondary ENA model. Two classes of secondary ENA models have been proposed; the first class assumes weak scattering of the energetic pickup protons in the LISM, and the second class assumes strong but spatially localized scattering. Here we present a numerical test of the "weak scattering" version of the secondary ENA model using our gyro-averaged kinetic model for the evolution of the phase-space distribution of protons in the outer heliosheath. As input for our test, we use distributions of the primary ENAs from our MHD-plasma/kinetic-neutral model of the heliosphere-LISM interaction. The magnetic field spectrum for the large-scale interstellar turbulence and an upper limit for the amplitude of small-scale local turbulence (SSLT) generated by protons are taken from observations by Voyager 1 in the LISM. The hybrid simulations of energetic protons are also used to set the bounding wavenumbers for the spectrum of SSLT. Our test supports the "weak scattering" version. This makes an additional solid step on the way to understanding the origin and formation of the IBEX ribbon and thus to improving our understanding of the interaction between the heliosphere and the LISM.
Osth, Adam F; Jansson, Anna; Dennis, Simon; Heathcote, Andrew
2018-08-01
A robust finding in recognition memory is that performance declines monotonically across test trials. Despite the prevalence of this decline, there is a lack of consensus on the mechanism responsible. Three hypotheses have been put forward: (1) interference is caused by learning of the test items; (2) the test items cause a shift in the context representation used to cue memory; and (3) participants change their speed-accuracy thresholds through the course of testing. We implemented all three possibilities in a combined model of recognition memory and decision making, which inherits the memory retrieval elements of the Osth and Dennis (2015) model and uses the diffusion decision model (DDM: Ratcliff, 1978) to generate choice and response times. We applied the model to four datasets that represent three challenges, namely the findings that: (1) the number of test items plays a larger role in determining performance than the number of studied items; (2) performance decreases less for strong items than weak items in pure lists but not in mixed lists; and (3) lexical decision trials interspersed between recognition test trials do not increase the rate at which performance declines. Analysis of the model's parameter estimates suggests that item interference plays a weak role in explaining the effects of recognition testing, while context drift plays a very large role. These results are consistent with prior work showing a weak role for item noise in recognition memory and that retrieval is a strong cause of context change in episodic memory. Copyright © 2018 Elsevier Inc. All rights reserved.
The rhizotoxicity of metal cations is related to their strength of binding to hard ligands.
Kopittke, Peter M; Menzies, Neal W; Wang, Peng; McKenna, Brigid A; Wehr, J Bernhard; Lombi, Enzo; Kinraide, Thomas B; Blamey, F Pax C
2014-02-01
Mechanisms whereby metal cations are toxic to plant roots remain largely unknown. Aluminum, for example, has been recognized as rhizotoxic for approximately 100 yr, but there is no consensus on its mode of action. The authors contend that the primary mechanism of rhizotoxicity of many metal cations is nonspecific and that the magnitude of toxic effects is positively related to the strength with which they bind to hard ligands, especially carboxylate ligands of the cell-wall pectic matrix. Specifically, the authors propose that metal cations have a common toxic mechanism through inhibiting the controlled relaxation of the cell wall as required for elongation. Metal cations such as Al³⁺ and Hg²⁺, which bind strongly to hard ligands, are toxic at relatively low concentrations because they bind strongly to the walls of cells in the rhizodermis and outer cortex of the root elongation zone with little movement into the inner tissues. In contrast, metal cations such as Ca²⁺, Na⁺, Mn²⁺, and Zn²⁺, which bind weakly to hard ligands, bind only weakly to the cell wall and move farther into the root cylinder. Only at high concentrations is their weak binding sufficient to inhibit the relaxation of the cell wall. Finally, different mechanisms would explain why certain metal cations (for example, Tl⁺, Ag⁺, Cs⁺, and Cu²⁺) are sometimes more toxic than expected through binding to hard ligands. The data presented in the present study demonstrate the importance of strength of binding to hard ligands in influencing a range of important physiological processes within roots through nonspecific mechanisms. © 2013 SETAC.
NASA Astrophysics Data System (ADS)
Skrutskie, Michael F.; Nelson, Matthew J.; Schmidt, Carl
2016-10-01
Fan Mountain Observatory, near Charlottesville, Virginia, is a dark-sky site that supports a number of telescopes including a 31-inch reflecting telescope equipped with a 1024×1024 HgCdTe 1-2.5 μm (YJHK) imager. Reflected sunlight ordinarily overwhelms Io's comparatively weak K-band (2.0-2.4 μm) volcanic emission in unresolved observations; however, when Io is eclipsed in Jupiter's shadow, even a small infrared-equipped telescope can detect Io's volcanic emission. The Fan Mountain Infrared Camera observed Io in eclipse at regular intervals, typically weekly, during the few months before and after Jupiter's March 2016 opposition. When in eclipse, Io's Jupiter-facing hemisphere is oriented toward Earth, with sub-Earth longitudes at the time of observation ranging from 345-360 degrees (pre-opposition) to 0-15 degrees (post-opposition). A K-band filter (2.04-2.42 μm) provided a bulk measurement of Io's volcanic flux, weighted largely toward the 2.4 μm end of this filter given the typical 500 K color temperature of the volcanic emission. Most epochs also included observation in a narrowband filter centered at 2.12 μm that, when combined with the broadband "long" wavelength measurement, provided a proxy for color temperature. The K-band flux of Io varied by more than 2 magnitudes during the 7-month observation interval. The [2.12 μm - K-band] color of the emission strongly correlated with the K-band flux, in the expected sense that the color temperature of the emission increased when Io's broadband volcanic flux was greatest. One epoch of TripleSpec near-IR Io eclipse spectroscopy (0.90-2.45 μm; R ~ 3000) from the Apache Point Observatory 3.5-meter telescope provided ground truth for transforming the filter photometry into quantitative temperatures.
Bas-relief map using texture analysis with application to live enhancement of ultrasound images.
Du, Huarui; Ma, Rui; Wang, Xiaoying; Zhang, Jue; Fang, Jing
2015-05-01
For ultrasound imaging, speckle is one of the most important factors in the degradation of contrast resolution because it masks meaningful texture and has the potential to interfere with diagnosis. Appropriate methods are therefore needed to reduce speckle noise, find the edges of structures and enhance weak borders between different organs in ultrasound imaging. Inspired by the principle of differential interference contrast microscopy, a "bas-relief map" is proposed that depicts the texture structure of ultrasound images. Based on the bas-relief map, an adaptive bas-relief filter was developed for ultrafast despeckling. Subsequently, an edge map was introduced to enhance the edges of images in real time. The holistic bas-relief map approach has been used experimentally with synthetic phantoms and digital ultrasound B-scan images of liver, kidney and gallbladder. Based on visual inspection and the performance metrics of the despeckled images, it was found that the bas-relief map approach is capable of effectively reducing speckle while significantly enhancing contrast and tissue boundaries in ultrasonic images, and its speckle reduction ability is comparable to that of the Kuan, Lee and Frost filters. Meanwhile, the proposed technique preserves more intra-region details than the popular speckle-reducing anisotropic diffusion technique and more effectively enhances edges. In addition, the adaptive bas-relief filter was much less time consuming than the Kuan, Lee and Frost filters and the speckle-reducing anisotropic diffusion technique. The bas-relief map strategy is effective for speckle reduction and live enhancement of ultrasound images, and can provide a valuable tool for clinical diagnosis. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, G
Purpose: CBCT is increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU values varied significantly between scan modes; Pelvis Obese had lower HU values than most, while the Image Gently mode had higher than expected HU values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of scan mode image quality could improve setup efficiency and lead to better treatment outcomes.
Gaspar, Lorena R; Tharmann, Julian; Maia Campos, Patricia M B G; Liebsch, Manfred
2013-02-01
The aim of this study was to evaluate the in vitro skin phototoxicity of cosmetic formulations containing photounstable and photostable UV-filters and vitamin A palmitate, assessed by two in vitro techniques: the 3T3 Neutral Red Uptake Phototoxicity Test and the Human 3-D Skin Model In Vitro Phototoxicity Test. For this, four different formulations containing vitamin A palmitate and different UV-filter combinations, two considered photostable and two considered photounstable, were prepared. Solutions of each UV-filter and vitamin under study and solutions of the four combinations under study were also prepared. Phototoxicity was assessed in vitro by the 3T3 NRU phototoxicity test (3T3-NRU-PT) and subsequently in a phototoxicity test on a reconstructed human skin model (H3D-PT). Avobenzone presented pronounced phototoxicity, and vitamin A presented a tendency toward a weak phototoxic potential. A synergistic effect of vitamin A palmitate on the phototoxicity of combinations containing avobenzone was observed. H3D-PT results did not confirm the positive 3T3-NRU-PT results. However, although the four formulations studied did not present any acute phototoxicity potential, combination 2, containing octyl methoxycinnamate (OMC), avobenzone (AVB) and 4-methylbenzylidene camphor (MBC), presented an indication of phototoxicity that should be further investigated in terms of the frequency of photoallergic or chronic phototoxicity in humans, since these tests are scientifically validated only to detect phototoxic potential with the aim of preventing phototoxic reactions in the general population, and positive results cannot predict the exact incidence of phototoxic reactions in humans. Copyright © 2012 Elsevier Ltd. All rights reserved.
Recommendations for the use of medications with continuous enteral nutrition.
Wohlt, Paul D; Zheng, Lan; Gunderson, Shelly; Balzar, Sarah A; Johnson, Benjamin D; Fish, Jeffrey T
2009-08-15
Recommendations for the use of medications with continuous enteral nutrition are provided. A literature review was conducted to identify primary literature reporting medication interactions with continuous enteral nutrition. For medications without supporting literature, manufacturers were contacted for information. Package inserts for specific medications were also investigated for any information to help guide recommendations. If no specific recommendations were made by the pharmaceutical manufacturer or the package insert concerning administration of products with continuous enteral nutrition, a tertiary database was consulted. Recommendations were generated by a consensus of clinicians for those medications that lacked specific recommendations in the primary literature or from the pharmaceutical manufacturer. Documentation of medication interactions with continuous enteral nutrition and food was then collated along with specific recommendations on how to administer the medication with regard to continuous enteral nutrition. Recommendations were classified as strong (grade 1) or weak (grade 2). The quality of evidence was classified as high (grade A), moderate (grade B), or low (grade C). Forty-six medications commonly given to hospitalized patients were evaluated. Twenty-four medications had recommendations based on available data, and the remaining 22 medications had recommendations based on a consensus of clinicians. There was a lack of published data regarding drug-nutrient interactions for a majority of the drugs commonly administered to patients receiving continuous enteral nutrition. Clinicians should recognize potential drug-nutrient interactions and use available evidence to optimize patients' drug therapy.
Environmental Awareness and Public Support for Protecting and Restoring Puget Sound
NASA Astrophysics Data System (ADS)
Safford, Thomas G.; Norman, Karma C.; Henly, Megan; Mills, Katherine E.; Levin, Phillip S.
2014-04-01
In an effort to garner consensus around environmental programs, practitioners have attempted to increase awareness about environmental threats and demonstrate the need for action. Nonetheless, how beliefs about the scope and severity of different types of environmental concerns shape support for management interventions is less clear. Using data from a telephone survey of residents of the Puget Sound region of Washington, we investigate how perceptions of the severity of different coastal environmental problems, along with other social factors, affect attitudes about policy options. We find that self-assessed environmental understanding and views about the seriousness of pollution, habitat loss, and salmon declines are only weakly related. Among survey respondents, women, young people, and those who believe pollution threatens Puget Sound are more likely to support policy measures such as increased enforcement and spending on restoration. Conversely, self-identified Republicans and individuals who view current regulations as ineffective tend to oppose governmental actions aimed at protecting and restoring Puget Sound. Support for one policy measure—tax credits for environmentally-friendly business practices—is not significantly affected by political party affiliation. These findings demonstrate that environmental awareness can influence public support for environmental policy tools. However, the nature of particular management interventions and other social forces can have important mitigating effects and need to be considered by practitioners attempting to develop environment-related social indicators and generate consensus around the need for action to address environmental problems.
Expression of ADP-ribosylation factor (ARF)-like protein 6 during mouse embryonic development.
Takada, Tatsuyuki; Iida, Keiko; Sasaki, Hiroshi; Taira, Masanori; Kimura, Hiroshi
2005-01-01
ADP-ribosylation factor (ARF)-like protein 6 (ARL6) is a member of the ARF-like protein (ARL) subfamily of small GTPases (Moss, 1995; Chavrier, 1999). ARLs are highly conserved through evolution, and most of them possess the consensus sequence required for GTP binding and hydrolysis (Pasquallato, 2002). Among ARLs, ARL6, which was initially isolated from a J2E erythroleukemic cell line, is divergent in its consensus sequences, and its expression has been shown to be limited to the brain and kidney in the adult mouse (Ingley, 1999). Recently, it was reported that mutations of the ARL6 gene cause type 3 Bardet-Biedl syndrome in humans and that ARL6 is involved in ciliary transport in C. elegans (Chiang, 2004; Fan, 2004). Here, we investigated the expression pattern of ARL6 during early mouse development by whole-mount in situ hybridization and found that, interestingly, ARL6 mRNA was localized around the node in 7.0-7.5 days post coitum (dpc) embryos, while weak expression was also found in the ectoderm. At a later stage (8.5 dpc), ARL6 was expressed in the neural plate and probably in the somites. Based on these results, a possible role of ARL6 in early development is discussed in relation to the findings in human and C. elegans (Chiang, 2004; Fan, 2004).
Conditions for successful data assimilation
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Chorin, A. J.
2013-12-01
Many applications in science and engineering require that the predictions of uncertain models be updated by information from a stream of noisy data. The model and the data jointly define a conditional probability density function (pdf), which contains all the information one has about the process of interest and various numerical methods can be used to study and approximate this pdf, e.g. the Kalman filter, variational methods or particle filters. Given a model and data, each of these algorithms will produce a result. We are interested in the conditions under which this result is reasonable, i.e. consistent with the real-life situation one is modeling. In particular, we show, using idealized models, that numerical data assimilation is feasible in principle only if a suitably defined effective dimension of the problem is not excessive. This effective dimension depends on the noise in the model and the data, and in physically reasonable problems it can be moderate even when the number of variables is huge. In particular, we find that the effective dimension being moderate induces a balance condition between the noises in the model and the data; this balance condition is often satisfied in realistic applications or else the noise levels are excessive and drown the underlying signal. We also study the effects of the effective dimension on particle filters in two instances, one in which the importance function is based on the model alone, and one in which it is based on both the model and the data. We have three main conclusions: (1) the stability (i.e., non-collapse of weights) in particle filtering depends on the effective dimension of the problem. Particle filters can work well if the effective dimension is moderate even if the true dimension is large (which we expect to happen often in practice). (2) A suitable choice of importance function is essential, or else particle filtering fails even when data assimilation is feasible in principle with a sequential algorithm. (3) There is a parameter range in which the model noise and the observation noise are roughly comparable, and in which even the optimal particle filter collapses, even under ideal circumstances. We further study the role of the effective dimension in variational data assimilation and particle smoothing, for both the weak and strong constraint problem. It was found that these methods too require a moderate effective dimension or else no accurate predictions can be expected. Moreover, variational data assimilation or particle smoothing may be applicable in the parameter range where particle filtering fails, because the use of more than one consecutive data set helps reduce the variance which is responsible for the collapse of the filters.
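For illustration, the weight collapse described above is easy to reproduce numerically. The following minimal sketch (not the authors' code; the Gaussian toy model, particle count, and dimensions are all assumptions) tracks the effective sample size of bootstrap importance weights as the dimension grows:

```python
# Illustrative toy model: importance-weight collapse of a bootstrap
# particle filter as the dimension grows.
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(log_w):
    """ESS = 1 / sum(w_i^2) for self-normalized weights w_i."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

n_particles = 1000
for dim in (1, 5, 20, 50):
    x = rng.normal(size=(n_particles, dim))      # samples from the prior
    y = rng.normal(size=dim)                     # one noisy observation
    log_w = -0.5 * np.sum((y - x) ** 2, axis=1)  # Gaussian log-likelihood
    print(f"dim={dim:3d}  ESS={effective_sample_size(log_w):8.1f}")
```

As the dimension increases, the effective sample size falls toward 1, which is the collapse the abstract ties to an excessive effective dimension.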
Altmann, Johannes; Rehfeld, Daniel; Träder, Kai; Sperlich, Alexander; Jekel, Martin
2016-04-01
Adsorption onto granular activated carbon (GAC) is an established technology in water and advanced wastewater treatment for the removal of organic substances from the liquid phase. Besides adsorption, the removal of particulate matter by filtration and biodegradation of organic substances in GAC contactors has frequently been reported. The application of GAC as both adsorbent for organic micropollutant (OMP) removal and filter medium for solids retention in tertiary wastewater filtration represents an energy- and space-saving option, but has rarely been considered because high dissolved organic carbon (DOC) and suspended solids concentrations in the influent of the GAC adsorber put a significant burden on this integrated treatment step and might result in frequent backwashing and unsatisfactory filtration efficiency. This pilot-scale study investigates the combination of GAC adsorption and deep-bed filtration with coagulation as a single advanced treatment step for simultaneous removal of OMPs and phosphorus from secondary effluent. GAC was assessed as upper filter layer in dual-media downflow filtration and as mono-media upflow filter with regard to filtration performance and OMP removal. Both filtration concepts effectively removed phosphorus and suspended solids, achieving effluent concentrations of 0.1 mg/L TP and 1 mg/L TSS, respectively. Analysis of grain size distribution and head loss within the filter bed showed that considerable head loss occurred in the topmost filter layer in downflow filtration, indicating that most particles do not penetrate deeply into the filter bed. Upflow filtration exhibited substantially lower head loss and effective utilization of the whole filter bed. Well-adsorbing OMPs (e.g. benzotriazole, carbamazepine) were removed by >80% up to throughputs of 8,000-10,000 bed volumes (BV), whereas weakly to medium adsorbing OMPs (e.g. primidone, sulfamethoxazole) showed removals <80% at <5,000 BV. In addition, breakthrough behavior was also determined for gabapentin, an anticonvulsant drug recently detected in drinking water resources for which suitable removal technologies are still largely unknown. Gabapentin showed poor adsorptive removal, resulting in rapid concentration increases. Whereas previous studies classified gabapentin as not readily biodegradable, sustained removal was observed after prolonged operation, pointing to biological elimination of gabapentin within the GAC filter. The application of GAC as filter medium was compared to direct addition of powdered activated carbon (PAC) to deep-bed filtration as a direct process alternative. Both options yielded comparable OMP removals for most compounds at similar carbon usage rates, but GAC achieved considerably higher removals for biodegradable OMPs. Based on the results, the application of GAC in combination with coagulation/filtration represents a promising alternative to powdered activated carbon and ozone for advanced wastewater treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Object Recognition and Localization: The Role of Tactile Sensors
Aggarwal, Achint; Kirchner, Frank
2014-01-01
Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization in ground and underwater environments. The first approach, called the Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF), is based on an innovative combination of particle filters, the Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge, this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired and provides a close integration between exploration and recognition. An edge-following exploration strategy is developed that receives feedback from the current state of recognition. A recognition-by-parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087
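The BRICPPF implementation is not given in the abstract, but its RANSAC ingredient follows the standard sample-fit-score loop. Below is a generic 2-D line-fitting sketch of that loop (illustrative only; the tolerance and iteration count are assumptions, and the paper matches 3D features against a database rather than fitting lines):

```python
# Generic RANSAC loop (2-D line fit) sketching the sample-fit-score
# structure; not the paper's feature-based database matcher.
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        if np.allclose(p, q):
            continue                                     # degenerate sample
        d = q - p
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
        dist = np.abs((points - p) @ n)                  # point-to-line distance
        inliers = dist < tol
        if inliers.sum() > best.sum():                   # keep best consensus set
            best = inliers
    return best
```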
Multiple UAV Cooperation for Wildfire Monitoring
NASA Astrophysics Data System (ADS)
Lin, Zhongjie
Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire, achieved through estimation of the wildfire spreading situation based on on-line measurements and a wise cooperation strategy to ensure efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With the abundant on-line measurements from the on-board sensors of the UAVs, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation, and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network, and a consensus-based method is proposed to address it. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis and then tested in an independent fire simulation environment to verify its effectiveness.
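As background for the consensus-based method mentioned above, the standard discrete-time consensus primitive on a network graph can be sketched as follows (the ring topology, step size, and scalar state are assumptions for illustration; the thesis derives its own cooperation strategy on top of such dynamics):

```python
# Standard discrete-time consensus update over an undirected graph:
# x_i <- x_i + eps * sum_j a_ij (x_j - x_i).
import numpy as np

def consensus_step(x, adjacency, eps=0.2):
    deg = adjacency.sum(axis=1)
    return x + eps * (adjacency @ x - deg * x)

# Four UAVs on a ring agreeing on a scalar fire-spread estimate.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 3.0, 5.0, 7.0])
for _ in range(50):
    x = consensus_step(x, A)
print(x)   # every entry approaches the network average, 4.0
```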
2009-01-01
Background: The aim of the ACE-Obesity study was to determine the economic credentials of interventions which aim to prevent unhealthy weight gain in children and adolescents. We have reported elsewhere on the modelled effectiveness of 13 obesity prevention interventions in children. In this paper, we report on the cost results and associated methods together with the innovative approach to priority setting that underpins the ACE-Obesity study. Methods: The Assessing Cost Effectiveness (ACE) approach combines technical rigour with 'due process' to facilitate evidence-based policy analysis. Technical rigour was achieved through use of standardised evaluation methods, a research team that assembles best available evidence and extensive uncertainty analysis. Cost estimates were based on pathway analysis, with resource usage estimated for the interventions and their 'current practice' comparator, as well as associated cost offsets. Due process was achieved through involvement of stakeholders, consensus decisions informed by briefing papers and 2nd stage filter analysis that captures broader factors that influence policy judgements in addition to cost-effectiveness results. The 2nd stage filters agreed by stakeholders were 'equity', 'strength of the evidence', 'feasibility of implementation', 'acceptability to stakeholders', 'sustainability' and 'potential for side-effects'. Results: The intervention costs varied considerably, both in absolute terms (from cost saving [6 interventions] to in excess of AUD50m per annum) and when expressed as a 'cost per child' estimate (from
Newborn hearing screening programme in Belgium: a consensus recommendation on risk factors.
Vos, Bénédicte; Senterre, Christelle; Lagasse, Raphaël; Levêque, Alain
2015-10-16
Understanding the risk factors for hearing loss is essential for designing the Belgian newborn hearing screening programme. Accordingly, they needed to be updated in accordance with current scientific knowledge. This study aimed to update the recommendations for the clinical management and follow-up of newborns with neonatal risk factors of hearing loss for the newborn screening programme in Belgium. A literature review was performed, and the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) system assessment method was used to determine the level of evidence quality and strength of the recommendation for each risk factor. The state of scientific knowledge, levels of evidence quality, and graded recommendations were subsequently assessed using a three-round Delphi consensus process (two online questionnaires and one face-to-face meeting). Congenital infections (i.e., cytomegalovirus, toxoplasmosis, and syphilis), a family history of hearing loss, consanguinity in (grand)parents, malformation syndromes, and foetal alcohol syndrome presented a 'high' level of evidence quality as neonatal risk factors for hearing loss. Because of the sensitivity of auditory function to bilirubin toxicity, hyperbilirubinaemia was assessed at a 'moderate' level of evidence quality. In contrast, a very low birth weight, low Apgar score, and hospitalisation in the neonatal intensive care unit ranged from 'very low' to 'low' levels, and ototoxic drugs were evidenced as 'very low'. Possible explanations for these 'very low' and 'low' levels include the improved management of these health conditions or treatments, and methodological weaknesses such as confounding effects, which make it difficult to conclude on individual risk factors. In the recommendation statements, the experts emphasised avoiding unidentified neonatal hearing loss and opted to include risk factors for hearing loss even in cases with weak evidence. The panel also highlighted the cumulative effect of risk factors for hearing loss. We revised the recommendations for the clinical management and follow-up of newborns exhibiting neonatal risk factors for hearing loss on the basis of the aforementioned evidence-based approach and clinical experience from experts. The next step is the implementation of these findings in the Belgian screening programme.
Recommendations for Selecting Drug-Drug Interactions for Clinical Decision Support
Tilson, Hugh; Hines, Lisa E.; McEvoy, Gerald; Weinstein, David M.; Hansten, Philip D.; Matuszewski, Karl; le Comte, Marianne; Higby-Baker, Stefanie; Hanlon, Joseph T.; Pezzullo, Lynn; Vieson, Kathleen; Helwig, Amy L.; Huang, Shiew-Mei; Perre, Anthony; Bates, David W.; Poikonen, John; Wittie, Michael A.; Grizzle, Amy J.; Brown, Mary; Malone, Daniel C.
2016-01-01
Purpose: To recommend principles for including drug-drug interactions (DDIs) in clinical decision support. Methods: A conference series was conducted to improve clinical decision support (CDS) for DDIs. The Content Workgroup met monthly by webinar from January 2013 to February 2014, with two in-person meetings to reach consensus. The workgroup consisted of 20 experts in pharmacology, drug information, and CDS from academia, government agencies, health information technology (IT) vendors, and healthcare organizations. Workgroup members addressed four key questions: (1) What process should be used to develop and maintain a standard set of DDIs?; (2) What information should be included in a knowledgebase of standard DDIs?; (3) Can/should a list of contraindicated drug pairs be established?; and (4) How can DDI alerts be more intelligently filtered? Results: To develop and maintain a standard set of DDIs for CDS in the United States, we recommend a transparent, systematic, and evidence-driven process with graded recommendations by a consensus panel of experts and oversight by a national organization. We outline key DDI information needed to help guide clinician decision-making. We recommend judicious classification of DDIs as contraindicated, as only a small set of drug combinations are truly contraindicated. Finally, we recommend more research to identify methods to safely reduce repetitive and less relevant alerts. Conclusion: A systematic ongoing process is necessary to select DDIs for alerting clinicians. We anticipate that our recommendations can lead to consistent and clinically relevant content for interruptive DDIs, and thus reduce alert fatigue and improve patient safety. PMID:27045070
Spatial effects in real networks: Measures, null models, and applications
NASA Astrophysics Data System (ADS)
Ruzzenenti, Franco; Picciolo, Francesco; Basosi, Riccardo; Garlaschelli, Diego
2012-12-01
Spatially embedded networks are shaped by a combination of purely topological (space-independent) and space-dependent formation rules. While it is quite easy to artificially generate networks where the relative importance of these two factors can be varied arbitrarily, it is much more difficult to disentangle these two architectural effects in real networks. Here we propose a solution to this problem, by introducing global and local measures of spatial effects that, through a comparison with adequate null models, effectively filter out the spurious contribution of nonspatial constraints. Our filtering allows us to consistently compare different embedded networks or different historical snapshots of the same network. As a challenging application we analyze the World Trade Web, whose topology is known to depend on geographic distances but is also strongly determined by nonspatial constraints (degree sequence or gross domestic product). Remarkably, we are able to detect weak but significant spatial effects both locally and globally in the network, showing that our method succeeds in retrieving spatial information even when nonspatial factors dominate. We finally relate our results to the economic literature on gravity models and trade globalization.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
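The paper's test statistic and the PSO tuning of α are not spelled out in the abstract; a crude frequency-domain stand-in for the same idea, suppressing components whose amplitude is indistinguishable from a measured noise reference, might look like this (the threshold rule and the alpha value are assumptions):

```python
# Crude stand-in for the ASTF idea (not the paper's statistic): keep only
# frequency bins whose amplitude clearly exceeds the noise reference.
# alpha loosely plays the role of the PSO-tuned significance level.
import numpy as np

def spectral_noise_filter(signal, noise_ref, alpha=3.0):
    # signal and noise_ref are 1-D arrays of equal length
    S = np.fft.rfft(signal)
    noise_amp = np.abs(np.fft.rfft(noise_ref))
    keep = np.abs(S) > alpha * noise_amp   # low similarity to noise
    return np.fft.irfft(S * keep, n=len(signal))
```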
NASA Astrophysics Data System (ADS)
Gaevaya, E. V.; Bogaychuk, Y. E.; Tarasova, S. S.; Skipin, L. N.; Zaharova, E. V.
2017-10-01
The article presents the results of studies of the chemical and granulometric composition of bore mud and its filtration coefficient during utilization. When phosphogypsum is added, the hydrophysical properties of the bore mud improve, and the soil grades from waterproof to weakly permeable. This is connected with the recovery of filterability through ion-exchange reactions and a decrease in the silt fraction content of the bore mud. On adding phosphogypsum to the bore mud, the pH decreased to 7.6-7.8. The concentrations of chloride and sulphate ions decreased owing to the replacement of Na+ by Ca2+ cations, which contributed to the formation of a water-stable structure with good filterability. The content of total forms of heavy metals in the man-made soil was lower than the MAC (APC) for loams. The man-made soil belongs to hazard class V for the surrounding environment.
Solid State Spin-Wave Quantum Memory for Time-Bin Qubits.
Gündoğan, Mustafa; Ledingham, Patrick M; Kutluer, Kutlu; Mazzera, Margherita; de Riedmatten, Hugues
2015-06-12
We demonstrate the first solid-state spin-wave optical quantum memory with on-demand read-out. Using the full atomic frequency comb scheme in a Pr(3+):Y2SiO5 crystal, we store weak coherent pulses at the single-photon level with a signal-to-noise ratio >10. Narrow-band spectral filtering based on spectral hole burning in a second Pr(3+):Y2SiO5 crystal is used to filter out the excess noise created by control pulses to reach an unconditional noise level of (2.0±0.3)×10(-3) photons per pulse. We also report spin-wave storage of photonic time-bin qubits with conditional fidelities higher than achievable by a measure and prepare strategy, demonstrating that the spin-wave memory operates in the quantum regime. This makes our device the first demonstration of a quantum memory for time-bin qubits, with on-demand read-out of the stored quantum information. These results represent an important step for the use of solid-state quantum memories in scalable quantum networks.
An Analysis of The Parameters Used In Speech ABR Assessment Protocols.
Sanfins, Milaine D; Hatzopoulos, Stavros; Donadon, Caroline; Diniz, Thais A; Borges, Leticia R; Skarzynski, Piotr H; Colella-Santos, Maria Francisca
2018-04-01
The aim of this study was to assess the parameters of choice, such as duration, intensity, rate, polarity, number of sweeps, window length, stimulated ear, fundamental frequency, first formant, and second formant, from previously published speech ABR studies. To identify candidate articles, five databases were assessed using the following keyword descriptors: speech ABR, ABR-speech, speech auditory brainstem response, auditory evoked potential to speech, speech-evoked brainstem response, and complex sounds. The search identified 1288 articles published between 2005 and 2015. After filtering the total number of papers according to the inclusion and exclusion criteria, 21 studies were selected. Analyzing the protocol details used in 21 studies suggested that there is no consensus to date on a speech-ABR protocol and that the parameters of analysis used are quite variable between studies. This inhibits the wider generalization and extrapolation of data across languages and studies.
Hou, Sukuan
2015-01-16
The Linxia Basin, Gansu Province, China, is known for its abundant and well preserved fossils. Here a new species, Euprox grandis sp. nov., is established based on a skull and antlers collected from the upper Miocene Liushu Formation of the Linxia Basin. The new species is distinguishable from other Euprox species by its large body size, notably long pedicle and weak burr. The main beam and the brow tine are slightly curved both medially and backwards, and the apex of the main beam curves slightly laterally. The upper cheek teeth are brachydont, with a clear central fold on the premolars and an internal postprotocrista and metaconule fold on M1-M2. The cingulum is almost absent, only occasionally weakly developed on the anterior and lingual surfaces of the teeth. Cladistic analysis was carried out using the TNT software, and two most parsimonious trees were retained. As the strict consensus tree shows, E. grandis appears to be an advanced muntiacine form, which may have a close relationship with the genus Muntiacus. The presence of E. grandis in the Linxia Basin adds new evidence to support a warm and humid environment during the late Miocene in the basin.
Iwasaki, H; Shiba, T; Makino, K; Nakata, A; Shinagawa, H
1989-01-01
The ruvA and ruvB genes of Escherichia coli constitute an operon which belongs to the SOS regulon. Genetic evidence suggests that the products of the ruv operon are involved in DNA repair and recombination. To begin biochemical characterization of these proteins, we developed a plasmid system that overproduced RuvB protein to 20% of total cell protein. Starting from the overproducing system, we purified RuvB protein. The purified RuvB protein behaved like a monomer in gel filtration chromatography and had an apparent relative molecular mass of 38 kilodaltons in sodium dodecyl sulfate-polyacrylamide gel electrophoresis, which agrees with the value predicted from the DNA sequence. The amino acid sequence of the amino-terminal region of the purified protein was analyzed, and the sequence agreed with the one deduced from the DNA sequence. Since the deduced sequence of RuvB protein contained the consensus sequence for ATP-binding proteins, we examined the ATP-binding and ATPase activities of the purified RuvB protein. RuvB protein had a stronger affinity for ADP than for ATP and weak ATPase activity. The results suggest that the weak ATPase activity of RuvB protein is at least partly due to end-product inhibition by ADP. PMID:2529252
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
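A much-simplified analogue of the nonlinear morphological detector is a white top-hat followed by a threshold (the decorrelating transform and the iterative feedback are omitted, and the structuring-element size and threshold factor are assumptions):

```python
# White top-hat detector for small bright targets; a simplified analogue
# of the morphological stage only, without decorrelation or iteration.
import numpy as np
from scipy import ndimage

def tophat_detect(image, size=5, k=4.0):
    th = ndimage.white_tophat(image, size=size)  # small-scale bright residue
    return th > th.mean() + k * th.std()         # boolean detection map
```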
Noise reduction in negative-ion quadrupole mass spectrometry
Chastagner, P.
1993-04-20
A quadrupole mass spectrometer (QMS) system is described having an ion source, quadrupole mass filter, and ion collector/recorder system. A weak, transverse magnetic field and an electron collector are disposed between the quadrupole and ion collector. When operated in negative ion mode, the ion source produces a beam of primarily negatively-charged particles from a sample, including electrons as well as ions. The beam passes through the quadrupole and enters the magnetic field, where the electrons are deflected away from the beam path to the electron collector. The negative ions pass undeflected to the ion collector where they are detected and recorded as a mass spectrum.
Bandwidth-induced reversal of asymmetry in optical-double-resonance amplitudes
NASA Astrophysics Data System (ADS)
Nitz, D. E.; Smith, A. V.; Levenson, M. D.; Smith, S. J.
1981-07-01
Optical-double-resonance measurements using ionization detection have been carried out in the 3S1/2-3P1/2-4D atomic-sodium system. Asymmetries observed in the production of 4D atoms from the two components of the Stark-split 3P1/2 state are found to be controlled by the far, very weak wings of the 17-MHz full-width-at-half-maximum laser line used to drive the 3S1/2-3P1/2 transition at detunings in the range 0-70 GHz. Suppression of the wings with a Fabry-Perot filter causes a pronounced reversal of the asymmetry.
Breivik, H; Bang, U; Jalonen, J; Vigfússon, G; Alahuhta, S; Lagerkranser, M
2010-01-01
Central neuraxial blocks (CNBs) for surgery and analgesia are an important part of anaesthesia practice in the Nordic countries. More active thromboprophylaxis with potent antihaemostatic drugs has increased the risk of bleeding into the spinal canal. National guidelines for minimizing this risk in patients who benefit from such blocks vary in their recommendations for safe practice. The Scandinavian Society of Anaesthesiology and Intensive Care Medicine (SSAI) appointed a task force of experts to establish a Nordic consensus on recommendations for best clinical practice in providing effective and safe CNBs in patients with an increased risk of bleeding. We performed a literature search and expert evaluation of evidence for (1) the possible benefits of CNBs on the outcome of anaesthesia and surgery, for (2) risks of spinal bleeding from hereditary and acquired bleeding disorders and antihaemostatic drugs used in surgical patients for thromboprophylaxis, for (3) risk evaluation in published case reports, and for (4) recommendations in published national guidelines. Proposals from the task force were available for feedback on the SSAI web-page during the summer of 2008. Neuraxial blocks can improve comfort and reduce morbidity (strong evidence) and mortality (moderate evidence) after surgical procedures. Haemostatic disorders, antihaemostatic drugs, anatomical abnormalities of the spine and spinal blood vessels, elderly patients, and renal and hepatic impairment are risk factors for spinal bleeding (strong evidence). Published national guidelines are mainly based on experts' opinions (weak evidence). The task force reached a consensus on Nordic guidelines, mainly based on our experts' opinions, but we acknowledge different practices in heparinization during vascular surgery and peri-operative administration of non-steroidal anti-inflammatory drugs during neuraxial blocks. Experts from the five Nordic countries offer consensus recommendations for safe clinical practice of neuraxial blocks and how to minimize the risks of serious complications from spinal bleeding. A brief version of the recommendations is available on http://www.ssai.info.
Polychromatic spectral pattern analysis of ultra-weak photon emissions from a human body.
Kobayashi, Masaki; Iwasa, Torai; Tada, Mika
2016-06-01
Ultra-weak photon emission (UPE), often designated as biophoton emission, is generally observed in a wide range of living organisms, including human beings. This phenomenon is closely associated with reactive oxygen species (ROS) generated during normal metabolic processes and pathological states induced by oxidative stress. Application of UPE to extract pathophysiological information has long been anticipated because of its potential non-invasiveness, which would facilitate diagnostic use. Nevertheless, its weak intensity and the complexity of the UPE mechanism hinder its use in practical applications. Spectroscopy is crucially important for UPE analysis. However, the filter-type spectroscopy technique conventionally used for UPE analysis intrinsically limits performance because of its monochromatic scheme. To overcome the shortcomings of conventional methods, the authors developed a polychromatic spectroscopy system for UPE spectral pattern analysis, based on a highly efficient lens system and a transmission-type diffraction grating with a highly sensitive, cooled, charge-coupled-device (CCD) camera. Spectral pattern analysis of the human body was performed on a fingertip using the developed system. The UPE spectrum covers the spectral range of 450-750nm, with a dominant emission region of 570-670nm. The primary peak is located in the 600-650nm region. Furthermore, UPE source exploration was demonstrated using the chemiluminescence spectrum of melanin coexisting with oxidized linoleic acid. Copyright © 2016 Elsevier B.V. All rights reserved.
Selection biases in empirical p(z) methods for weak lensing
Gruen, D.; Brimioulle, F.
2017-02-23
To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal to noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.
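Under the ideal magnitude-limited conditions described above, such an empirical estimator reduces to histogramming reference redshifts within magnitude-colour cells. A toy version follows (the binning scheme is illustrative and deliberately ignores the selection effects the paper quantifies):

```python
# Toy empirical p(z): per (magnitude, colour) cell, histogram the
# spectroscopic reference redshifts; selection effects are ignored.
import numpy as np

def empirical_pz(ref_mag, ref_col, ref_z, mag_edges, col_edges, z_edges):
    i = np.digitize(ref_mag, mag_edges) - 1
    j = np.digitize(ref_col, col_edges) - 1
    shape = (len(mag_edges) - 1, len(col_edges) - 1, len(z_edges) - 1)
    pz = np.zeros(shape)
    for a in range(shape[0]):
        for b in range(shape[1]):
            sel = (i == a) & (j == b)
            if sel.any():
                pz[a, b], _ = np.histogram(ref_z[sel], bins=z_edges,
                                           density=True)
    return pz
```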
Hayashi, Paul H
2016-02-04
Hepatotoxicity due to drugs, herbal or dietary supplements remains largely a clinical diagnosis based on meticulous history taking and exclusion of other causes of liver injury. In 2004, the U.S. Drug-Induced Liver Injury Network (DILIN) was created under the auspices of the U.S. National Institute of Diabetes and Digestive and Kidney Diseases with the aim of establishing a large registry of cases for clinical, epidemiological and mechanistic study. From inception, the DILIN has used an expert opinion process that incorporates consensus amongst three different DILIN hepatologists assigned to each case. It is the most well-established, well-described and rigorous expert opinion process for DILI to date, and yet it is an imperfect standard. This review discusses the DILIN expert opinion process, its strengths and weaknesses, psychometric performance and future directions.
AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J
2017-06-01
To evaluate the construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis, construct validity was examined through hypothesis testing using the recommended guidelines of the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.
Real world ocean rogue waves explained without the modulational instability.
Fedele, Francesco; Brennan, Joseph; Ponce de León, Sonia; Dudley, John; Dias, Frédéric
2016-06-21
Since the 1990s, the modulational instability has commonly been used to explain the occurrence of rogue waves that appear from nowhere in the open ocean. However, the importance of this instability in the context of ocean waves is not well established. This mechanism has been successfully studied in laboratory experiments and in mathematical studies, but there is no consensus on what actually takes place in the ocean. In this work, we question the oceanic relevance of this paradigm. In particular, we analyze several sets of field data in various European locations with various tools, and find that the main generation mechanism for rogue waves is the constructive interference of elementary waves enhanced by second-order bound nonlinearities and not the modulational instability. This implies that rogue waves are likely to be rare occurrences of weakly nonlinear random seas.
[eHealth in Peru: implementation of policies to strengthen health information systems].
Curioso, Walter H
2014-01-01
Health information systems play a key role in enabling high quality, complete health information to be available in a timely fashion for operational and strategic decision-making that makes it possible to save lives and improve the health and quality of life of the population. In many countries, health information systems are weak, incomplete, and fragmented. However, there is broad consensus in the literature of the need to strengthen health information systems in countries around the world. The objective of this paper is to present the essential components of the conceptual framework to strengthen health information systems in Peru. It describes the principal actions and strategies of the Ministry of Health of Peru during the process of strengthening health information systems. These systems make it possible to orient policies for appropriate decision-making in public health.
Udoh, IA; Mantell, JE; Sandfort, T; Eighmy, MA
2010-01-01
HIV prevalence in the Niger Delta of Nigeria is generally attributed to concurrent sexual partnerships and weak public sector health care and education systems. This paper examines the likelihood of additional factors, such as the intersection of widespread poverty, migration, and sex work, as contributory channels of HIV transmission in the region. To explore this issue, we conducted a Delphi survey with 27 experts to formulate consensus about the impact of poverty, migration, and commercial sex on AIDS in the Niger Delta. Results suggest that these factors and others have exacerbated the epidemic in the region. To stop the further spread of HIV in the region, efforts to address poverty, sex work, and multiple sexual partnerships require building a public-private partnership which involves participatory action strategies among key stakeholders. PMID:19444664
NASA Astrophysics Data System (ADS)
Tihhonova, O.; Courbin, F.; Harvey, D.; Hilbert, S.; Rusu, C. E.; Fassnacht, C. D.; Bonvin, V.; Marshall, P. J.; Meylan, G.; Sluse, D.; Suyu, S. H.; Treu, T.; Wong, K. C.
2018-07-01
We present a weak gravitational lensing measurement of the external convergence along the line of sight to the quadruply lensed quasar HE 0435-1223. Using deep r-band images from Subaru Suprime Cam, we observe galaxies down to a 3σ limiting magnitude of ~26 mag, resulting in a source galaxy density of 14 galaxies per square arcminute after redshift-based cuts. Using an inpainting technique and a multiscale entropy filtering algorithm, we find that the region in close proximity to the lens has an estimated external convergence of κ =-0.012^{+0.020}_{-0.013} and is hence marginally underdense. We also rule out the presence of any halo with a mass greater than Mvir = 1.6 × 10^14 h^-1 M⊙ (68 per cent confidence limit). Our results, consistent with previous studies of this lens, confirm that the intervening mass along the line of sight to HE 0435-1223 does not affect significantly the cosmological results inferred from the time-delay measurements of that specific object.
NASA Astrophysics Data System (ADS)
Zhou, Lei; Li, Zhengying; Xiang, Na; Bao, Xiaoyi
2018-06-01
A high-speed quasi-distributed demodulation method based on microwave photonics and the chromatic dispersion effect is designed and implemented for weak fiber Bragg gratings (FBGs). Due to the effect of the dispersion compensation fiber (DCF), an FBG wavelength shift leads to a change in the difference frequency signal at the mixer. By crossing microwave sweep cycles, all wavelengths of the cascaded FBGs can be obtained at high speed by measuring the frequency changes. Moreover, through the introduction of Chirp-Z and Hanning window algorithms, the difference frequency signal is analyzed accurately. By adopting a single-peak filter as a reference, the length disturbance of the DCF caused by temperature can also be eliminated. Therefore, the accuracy of this novel method is greatly improved, and high-speed demodulation of FBGs can easily be realized. The feasibility and performance are experimentally demonstrated using 105 FBGs with 0.1% reflectivity and 1 m spatial interval. Results show that each grating can be distinguished well, the demodulation rate is as high as 40 kHz, and the accuracy is about 8 pm.
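A sketch of the final demodulation step, estimating the dominant difference frequency under a Hanning window and mapping it to a Bragg wavelength shift (the Chirp-Z refinement is omitted, and the dispersion calibration constant below is a made-up placeholder, not a value from the paper):

```python
# Estimate the dominant difference frequency with a Hanning window, then
# map it to a Bragg wavelength shift. K_DISP is a hypothetical calibration
# constant set by the DCF dispersion; the Chirp-Z refinement is omitted.
import numpy as np

def beat_frequency(x, fs):
    w = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * w))
    k = np.argmax(spec[1:]) + 1          # skip the DC bin
    return k * fs / len(x)

K_DISP = 1.25e-6                         # nm per Hz, placeholder value

def wavelength_shift(x, fs):
    return K_DISP * beat_frequency(x, fs)
```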
Line Assignments and Position Measurements in Several Weak CO2 Bands between 4590/cm and 7930/cm
NASA Technical Reports Server (NTRS)
Giver, L. P.; Kshirsagar, R. J.; Freedman, R. C.; Chackerian, C.; Wattson, R. B.
1998-01-01
A substantial set of CO2 spectra from 4500 to 12000/cm has been obtained at Ames with 1500 m path length using a Bomem DA8 FTS. The signal/noise was improved compared to prior spectra obtained in this laboratory by including a filter wheel limiting the band-pass of each spectrum to several hundred/cm. We have measured positions of lines in several weak bands not previously resolved in laboratory spectra. Using our positions and assignments of lines of the Q branch of the 31103-00001 vibrational band at 4591/cm, we have re-determined the rotational constants for the 31103f levels. Q-branch lines of this band were previously observed, but misassigned, in Venus spectra by Mandin. The current HITRAN values of the rotational constants for this level are incorrect due to the Q-branch misassignments. Our prior measurements of the 21122-00001 vibrational band at 7901/cm were limited to Q- and R-branch lines; with the improved signal/noise of these new spectra we have now measured lines in the weaker P branch.
Multispectral open-air intraoperative fluorescence imaging.
Behrooz, Ali; Waterman, Peter; Vasquez, Kristine O; Meganck, Jeff; Peterson, Jeffrey D; Faqir, Ilias; Kempner, Joshua
2017-08-01
Intraoperative fluorescence imaging informs decisions regarding surgical margins by detecting and localizing signals from fluorescent reporters, labeling targets such as malignant tissues. This guidance reduces the likelihood of undetected malignant tissue remaining after resection, eliminating the need for additional treatment or surgery. The primary challenges in performing open-air intraoperative fluorescence imaging come from the weak intensity of the fluorescence signal in the presence of strong surgical and ambient illumination, and the auto-fluorescence of non-target components, such as tissue, especially in the visible spectral window (400-650 nm). In this work, a multispectral open-air fluorescence imaging system is presented for translational image-guided intraoperative applications, which overcomes these challenges. The system is capable of imaging weak fluorescence signals with nanomolar sensitivity in the presence of surgical illumination. This is done using synchronized fluorescence excitation and image acquisition with real-time background subtraction. Additionally, the system uses a liquid crystal tunable filter for acquisition of multispectral images that are used to spectrally unmix target fluorescence from non-target auto-fluorescence. Results are validated by preclinical studies on murine models and translational canine oncology models.
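The spectral unmixing step can be sketched as per-pixel non-negative least squares against known endmember spectra, a standard formulation (the system's actual unmixing algorithm is not specified in the abstract):

```python
# Per-pixel linear spectral unmixing by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers):
    """endmembers: (n_bands, n_components) array whose columns are the
    reference spectra (target fluorophore, tissue autofluorescence, ...).
    Returns the non-negative abundance of each component."""
    abundances, _residual = nnls(endmembers, pixel_spectrum)
    return abundances
```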
Utilization of drilling cuttings with extraction of ground for recultivation of disturbed soils
NASA Astrophysics Data System (ADS)
Gaevaya, E. V.; Bogaychuk, Y. E.; Tarasova, S. S.; Skipin, L. N.; Zakharova, E. V.
2017-10-01
Drilling of wells produces bore mud, consisting of drill cuttings mixed with waste drilling mud. Bore mud has unfavorable physical-chemical, physical, and chemical properties: high salt content, increased alkalinity, ashy structure, soil crusting, low aeration, weak filterability, and so on. When phosphogypsum is added to the bore mud, the pH decreases from alkaline (10.5) to weakly alkaline (7.6); this decrease is due to the acidity of the phosphogypsum and the resulting neutralization of the bore mud. Concentrations of chloride ions and sulphate ions in the reclaimed bore mud were 70±7 and 456±46, respectively. The content of oil products in the resulting soil was 198.0-219.0 mg/kg. When phosphogypsum, sand, sorbent, and the humic formulation «Rostok» were added to the bore mud, the phytomeliorant cultures showed good germinating ability (93.3%). Meadow grass showed 100% germination with overground sprouts 10.7 cm high, and red fescue showed 80% germination with sprouts 9.6 cm high.
Research and Implementation of Heart Sound Denoising
NASA Astrophysics Data System (ADS)
Liu, Feng; Wang, Yutai; Wang, Yanxiang
Heart sound is one of the most important physiological signals, but its acquisition is vulnerable to external interference. Because the heart sound is a weak electrical signal, even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and hence to misdiagnosis. Removing the noise mixed with the heart sound is therefore essential. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. The noisy heart sound signal is first transformed into the wavelet domain and decomposed at multiple levels using the wavelet transform. Soft thresholding is then applied to the detail coefficients to suppress noise, which significantly improves the denoising performance. The denoised signal is reconstructed by stepwise coefficient reconstruction from the processed detail coefficients. Finally, 50 Hz power-line interference and 35 Hz electromechanical interference are removed with a notch filter.
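A minimal sketch of this denoising pipeline, assuming a db4 wavelet, five decomposition levels, and the universal soft threshold (the paper's exact wavelet, depth, and threshold rule are not stated):

import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt

def denoise_heart_sound(x, fs, wavelet="db4", level=5):
    # Multi-level wavelet decomposition of the noisy heart sound.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale from the finest detail band (robust MAD estimate),
    # then soft thresholding of all detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    # Stepwise reconstruction from the processed coefficients.
    y = pywt.waverec(coeffs, wavelet)[: len(x)]
    # Notch out 50 Hz power-line and 35 Hz electromechanical interference.
    for f0 in (50.0, 35.0):
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        y = filtfilt(b, a, y)
    return y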
Estimating air chemical emissions from research activities using stack measurement data.
Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V
2013-03-01
Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
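The effect of the positivity filter is easy to reproduce with a toy Monte Carlo; all quantities below are invented for illustration and are not taken from the study:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical net stack concentrations near zero (emissions close to
# building background), with measurement noise that permits negatives.
net_conc = rng.normal(loc=0.05, scale=0.5, size=10_000)  # ug/m^3, invented
flow = rng.normal(loc=20.0, scale=2.0, size=10_000)      # m^3/s, invented
seconds_per_year = 8760 * 3600

annual_kg = net_conc * 1e-9 * flow * seconds_per_year    # kg released per year

filtered = annual_kg[annual_kg > 0]                      # positivity filter
print(f"unfiltered mean: {annual_kg.mean():.2f} kg/yr")
print(f"filtered mean:   {filtered.mean():.2f} kg/yr")

Because the unfiltered distribution spans zero, discarding the unphysical negative draws necessarily shifts the mean upward, mirroring the average 16% increase reported above.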
Baker, Christa A; Ma, Lisa; Casareale, Chelsea R; Carlson, Bruce A
2016-08-24
In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. Copyright © 2016 the authors 0270-6474/16/368985-16$15.00/0.
Bioenergy Potential Based on Vinasse From Ethanol Industrial Waste to Green Energy Sustainability
NASA Astrophysics Data System (ADS)
Harihastuti, Nani; Marlena, Bekti
2018-02-01
The wastewater from the alcohol industry, called vinasse, has a high organic content, with BOD5 = 109.038 mg/l, COD = 353.797 mg/l, and TSS = 7200 mg/l, pH 4-5, and a temperature of around 40-50ºC. Most current treatment of alcohol wastewater still uses facultative anaerobic technology in open ponds covered only with HDPE plastic. This technology produces suboptimal biogas and has several weaknesses: a long hydraulic residence time (HRT) of 40-50 days, large land requirements, low COD reduction efficiency, and a high risk of fire and of biogas leakage, which contributes to greenhouse gas emissions and global warming. The innovative Fixed Dome-Hybrid Anaerobic Filter integrated reactor model is designed to expand the contact area between substrate and microbes by modifying the substrate flow system and the filter area, and to integrate the reactor with a gas accumulator. This integrated Fixed Dome-Hybrid Anaerobic Filter technology has the advantage of producing optimal bioenergy, with a CH4 content above 50%, a COD reduction above 85%, and a hydraulic residence time of about 10 (ten) days. The resulting bioenergy is renewable energy produced from vinasse, an alcohol-industry waste, and can be used as a fuel substitute in the industry's distillation or boiler processes in a sustainable and cleaner manner.
Temporal properties of responses to sound in the ventral nucleus of the lateral lemniscus.
Recio-Spinoso, Alberto; Joris, Philip X
2014-02-01
Besides the rapid fluctuations in pressure that constitute the "fine structure" of a sound stimulus, slower fluctuations in the sound's envelope represent an important temporal feature. At various stages in the auditory system, neurons exhibit tuning to envelope frequency and have been described as modulation filters. We examine such tuning in the ventral nucleus of the lateral lemniscus (VNLL) of the pentobarbital-anesthetized cat. The VNLL is a large but poorly accessible auditory structure that provides a massive inhibitory input to the inferior colliculus. We test whether envelope filtering effectively applies to the envelope spectrum when multiple envelope components are simultaneously present. We find two broad classes of response with often complementary properties. The firing rate of onset neurons is tuned to a band of modulation frequencies, over which they also synchronize strongly to the envelope waveform. Although most sustained neurons show little firing rate dependence on modulation frequency, some of them are weakly tuned. The latter neurons are usually band-pass or low-pass tuned in synchronization, and a reverse-correlation approach demonstrates that their modulation tuning is preserved to nonperiodic, noisy envelope modulations of a tonal carrier. Modulation tuning to this type of stimulus is weaker for onset neurons. In response to broadband noise, sustained and onset neurons tend to filter out envelope components over a frequency range consistent with their modulation tuning to periodically modulated tones. The results support a role for VNLL in providing temporal reference signals to the auditory midbrain.
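Synchronization to the envelope in this literature is conventionally quantified by vector strength; the abstract does not name the metric, so the following is a standard illustration rather than the authors' exact analysis:

import numpy as np

def vector_strength(spike_times, mod_freq):
    # R in [0, 1]: 1 = perfect phase locking to the modulation period,
    # 0 = spikes spread uniformly across the cycle.
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.exp(1j * phases).mean())

# Synthetic spike train locked to a 100 Hz envelope with 1 ms jitter.
rng = np.random.default_rng(0)
t = np.arange(200) / 100.0 + rng.normal(0.0, 1e-3, 200)
print(f"R at 100 Hz: {vector_strength(t, 100.0):.2f}")  # high (phase-locked)
print(f"R at 73 Hz:  {vector_strength(t, 73.0):.2f}")   # near 0 (no locking)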
NASA Astrophysics Data System (ADS)
Cornejo, Juan Carlos
The Standard Model has been the most successful theory in describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, its limitations have long been suspected from its lack of a description of gravity and of dark matter. Its largest challenge to date has been the observation of neutrino oscillations, which implies that neutrinos may not be massless, as the Standard Model requires. The growing consensus is that the Standard Model is simply a lower-energy effective field theory and that new physics lies at much higher energies. The Qweak experiment tests the electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Qpw). Any sign of "new physics" would appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry in elastic scattering of a longitudinally polarized electron beam from an unpolarized proton target. The experiment required the electron beam polarization to be measured to an absolute uncertainty of 1%. At this level, the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation details the use of Compton scattering to determine the electron beam polarization via detection of the scattered photon, and concludes with an independent analysis of the blinded Qweak measurement.
Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock: 2016.
Rhodes, Andrew; Evans, Laura E; Alhazzani, Waleed; Levy, Mitchell M; Antonelli, Massimo; Ferrer, Ricard; Kumar, Anand; Sevransky, Jonathan E; Sprung, Charles L; Nunnally, Mark E; Rochwerg, Bram; Rubenfeld, Gordon D; Angus, Derek C; Annane, Djillali; Beale, Richard J; Bellinghan, Geoffrey J; Bernard, Gordon R; Chiche, Jean-Daniel; Coopersmith, Craig; De Backer, Daniel P; French, Craig J; Fujishima, Seitaro; Gerlach, Herwig; Hidalgo, Jorge Luis; Hollenberg, Steven M; Jones, Alan E; Karnad, Dilip R; Kleinpell, Ruth M; Koh, Younsuk; Lisboa, Thiago Costa; Machado, Flavia R; Marini, John J; Marshall, John C; Mazuski, John E; McIntyre, Lauralyn A; McLean, Anthony S; Mehta, Sangeeta; Moreno, Rui P; Myburgh, John; Navalesi, Paolo; Nishida, Osamu; Osborn, Tiffany M; Perner, Anders; Plunkett, Colleen M; Ranieri, Marco; Schorr, Christa A; Seckel, Maureen A; Seymour, Christopher W; Shieh, Lisa; Shukri, Khalid A; Simpson, Steven Q; Singer, Mervyn; Thompson, B Taylor; Townsend, Sean R; Van der Poll, Thomas; Vincent, Jean-Louis; Wiersinga, W Joost; Zimmerman, Janice L; Dellinger, R Phillip
2017-03-01
To provide an update to "Surviving Sepsis Campaign Guidelines for Management of Sepsis and Septic Shock: 2012". A consensus committee of 55 international experts representing 25 international organizations was convened. Nominal groups were assembled at key international meetings (for those committee members attending the conference). A formal conflict-of-interest (COI) policy was developed at the onset of the process and enforced throughout. A stand-alone meeting was held for all panel members in December 2015. Teleconferences and electronic-based discussion among subgroups and among the entire committee served as an integral part of the development. The panel consisted of five sections: hemodynamics, infection, adjunctive therapies, metabolic, and ventilation. Population, intervention, comparison, and outcomes (PICO) questions were reviewed and updated as needed, and evidence profiles were generated. Each subgroup generated a list of questions, searched for best available evidence, and then followed the principles of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system to assess the quality of evidence from high to very low, and to formulate recommendations as strong or weak, or best practice statement when applicable. The Surviving Sepsis Guideline panel provided 93 statements on early management and resuscitation of patients with sepsis or septic shock. Overall, 32 were strong recommendations, 39 were weak recommendations, and 18 were best-practice statements. No recommendation was provided for four questions. Substantial agreement exists among a large cohort of international experts regarding many strong recommendations for the best care of patients with sepsis. Although a significant number of aspects of care have relatively weak support, evidence-based recommendations regarding the acute management of sepsis and septic shock are the foundation of improved outcomes for these critically ill patients with high mortality.
Lights All Askew: Systematics in Galaxy Images from Megaparsecs to Microns
NASA Astrophysics Data System (ADS)
Bradshaw, Andrew Kenneth
The stars and galaxies are not where they seem. In the process of imaging and measurement, the light from distant objects is distorted, blurred, and skewed by several physical effects on scales from megaparsecs to microns. Charge-coupled devices (CCDs) provide sensitive detection of this light, but introduce their own problems in the form of systematic biases. Images of these stars and galaxies are formed in CCDs when incoming light generates photoelectrons which are then collected in a pixel's potential well and measured as signal. However, these signal electrons can be diverted from purely parallel paths toward the pixel wells by transverse fields sourced by structural elements of the CCD, accidental imperfections in fabrication, or dynamic electric fields induced by other collected charges. These charge transport anomalies lead to measurable systematic errors in the images which bias cosmological inferences based on them. The physics of imaging therefore deserves thorough investigation, which is performed in the laboratory using a unique optical beam simulator and in computer simulations of charge transport. On top of detector systematics, there are often biases in the mathematical analysis of pixelized images; in particular, the location, shape, and orientation of stars and galaxies. Using elliptical Gaussians as a toy model for galaxies, it is demonstrated how small biases in the computed image moments lead to observable orientation patterns in modern survey data. Also presented are examples of the reduction of data and fitting of optical aberrations of images in the lab and on the sky which are modeled by physically or mathematically-motivated methods. Finally, end-to-end analysis of the weak gravitational lensing signal is presented using deep sky data as well as in N-body simulations. It is demonstrated how measured weak lens shear can be transformed by signal matched filters which aid in the detection of mass overdensities and separate signal from noise. A commonly-used decomposition of shear into two components, E- and B-modes, is thoroughly tested and both modes are shown to be useful in the detection of large scale structure. We find several astrophysical sources of B-mode and explain their apparent origin. The methods presented therefore offer an optimal way to filter weak gravitational shear into maps of large scale structure through the process of cosmic mass cartography.
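The moment-based shape measurements behind these orientation biases can be sketched with the standard unweighted quadrupole definitions, e1 = (Qxx - Qyy)/(Qxx + Qyy) and e2 = 2Qxy/(Qxx + Qyy); the toy galaxy parameters below are invented:

import numpy as np

def moments_ellipticity(img):
    # Flux-weighted centroid, quadrupole moments, and ellipticity (e1, e2).
    y, x = np.indices(img.shape)
    flux = img.sum()
    xc, yc = (img * x).sum() / flux, (img * y).sum() / flux
    qxx = (img * (x - xc) ** 2).sum() / flux
    qyy = (img * (y - yc) ** 2).sum() / flux
    qxy = (img * (x - xc) * (y - yc)).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

# Toy elliptical Gaussian galaxy: axis ratio 0.7, major axis at 30 degrees.
y, x = np.indices((64, 64)) - 31.5
theta = np.radians(30.0)
xr = x * np.cos(theta) + y * np.sin(theta)
yr = -x * np.sin(theta) + y * np.cos(theta)
img = np.exp(-0.5 * ((xr / 6.0) ** 2 + (yr / 4.2) ** 2))
print("e1, e2 =", moments_ellipticity(img))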
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Gao, Y
Purpose: The dynamic bowtie filter is an innovative design capable of modulating the X-ray beam and balancing the flux at the detectors, and it introduces a new way to perform patient-specific CT scan optimizations. This study demonstrates the feasibility of fast Monte Carlo dose calculation for a type of dynamic bowtie filter for cone-beam CT (Liu et al., PLoS ONE 9(7), 2014) using MIC coprocessors. Methods: The dynamic bowtie filter in question consists of a highly attenuating bowtie component (HB) and a weakly attenuating bowtie (WB). The HB is filled with CeCl3 solution and its surface is defined by a transcendental equation. The WB is an elliptical cylinder filled with air and immersed in the HB. As the scanner rotates, the orientation of the WB remains fixed with respect to the static patient. In our Monte Carlo simulation, the HB was approximated by 576 boxes. The phantom was a voxelized elliptical cylinder composed of PMMA and surrounded by air (44 cm × 44 cm × 40 cm, 1000×1000×1 voxels). The dose to the PMMA phantom was tallied with 0.15% statistical uncertainty under a 100 kVp source. Two Monte Carlo codes, ARCHER and MCNP-6.1, were compared. Both used double precision, and compiler flags that may trade accuracy for speed were avoided. Results: The wall time of the simulation was 25.4 seconds with ARCHER on a 5110P MIC, 40 seconds on an X5650 CPU, and 523 seconds with multithreaded MCNP on the same CPU. The high performance of ARCHER is attributed to the parameterized geometry and vectorization of the program hotspots. Conclusion: The dynamic bowtie filter modeled in this study is able to effectively reduce the dynamic range of the detected signals for photon-counting detectors. With appropriate software optimization methods, accelerator-based (MIC and GPU) Monte Carlo dose engines have shown good performance and can contribute to patient-specific CT scan optimizations.
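As a much-simplified sketch of the voxel dose-tally principle (not ARCHER's physics: monoenergetic photons, no scatter, full local absorption at the first interaction):

import numpy as np

rng = np.random.default_rng(1)

mu = 0.02                           # attenuation coefficient, 1/mm (illustrative)
depth_mm, voxel_mm = 440.0, 0.44    # slab matching the 44 cm phantom span
n_voxels = int(depth_mm / voxel_mm)
n_photons, e_kev = 1_000_000, 60.0  # photon count and energy, invented

# Sample exponential free paths; photons interacting beyond the slab escape.
path = rng.exponential(1.0 / mu, size=n_photons)
voxel = (path[path < depth_mm] / voxel_mm).astype(int)

# Energy deposited per voxel (keV); dose follows after dividing by voxel mass.
edep = np.bincount(voxel, minlength=n_voxels) * e_kev
print("energy in first 3 voxels (keV):", edep[:3])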
Jimenez-Fonseca, P; Carmona-Bayonas, A; Calderon, C; Fontcuberta Boj, J; Font, C; Lecumberri, R; Monreal, M; Muñoz Martín, A J; Otero, R; Rubio, A; Ruiz-Artacho, P; Suarez Fernández, C; Colome, E; Pérez Segura, P
2017-08-01
Decision-making in cancer-related venous thromboembolism (VTE) is often founded on scant lines of evidence and weak recommendations. The aim of this work is to evaluate the percentage of agreement surrounding a series of statements about complex, clinically relevant, and highly uncertain aspects of management, in order to formulate explicit action guidelines. Opinions were gathered with a structured questionnaire with appropriate scores and were agreed upon using a Delphi method. Questions were selected from a list of recommendations with low evidence in the Spanish Society of Oncology Clinical Guideline for Thrombosis. The questionnaire was completed in two iterations by a multidisciplinary panel of experts in thrombosis. Of the 123 statements analyzed, the panel concurred on 22 (17%) and another 81 (65%) were agreed on by qualified majority, including important aspects of long-term and prolonged anticoagulation, major bleeding and rethrombosis management, treatment in special situations, catheter-related thrombosis, and thromboprophylaxis. Among them, the panelists agreed that incidental events should be equated to symptomatic ones, endorsed long-term and extended use of full-dose low-molecular-weight heparin, and concluded that the Khorana score is not sensitive enough to uphold an effective thromboprophylaxis strategy. Though the level of consensus varied depending on the scenario presented, overall, the iterative process achieved broad agreement on the general treatment principles of cancer-associated VTE. Clinical validation of these statements under genuine practice conditions would be useful.
A comparative evaluation of genome assembly reconciliation tools.
Alhakami, Hind; Mirebrahim, Hamid; Lonardi, Stefano
2017-05-18
The majority of eukaryotic genomes are unfinished due to the algorithmic challenges of assembling them. A variety of assembly and scaffolding tools are available, but it is not always obvious which tool or parameters to use for a specific genome size and complexity. It is, therefore, common practice to produce multiple assemblies using different assemblers and parameters, then select the best one for public release. A more compelling approach would allow one to merge multiple assemblies with the intent of producing a higher quality consensus assembly, which is the objective of assembly reconciliation. Several assembly reconciliation tools have been proposed in the literature, but their strengths and weaknesses have never been compared on a common dataset. We fill this need with this work, in which we report on an extensive comparative evaluation of several tools. Specifically, we evaluate contiguity, correctness, coverage, and the duplication ratio of the merged assembly compared to the individual assemblies provided as input. None of the tools we tested consistently improved the quality of the input GAGE and synthetic assemblies. Our experiments show an increase in contiguity in the consensus assembly when the original assemblies already have high quality. In terms of correctness, the quality of the results depends on the specific tool, as well as on the quality and the ranking of the input assemblies. In general, the number of misassemblies ranges from being comparable to the best of the input assembly to being comparable to the worst of the input assembly.
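Contiguity in such comparisons is typically summarized by N50, the contig length at which half of the total assembly is contained in contigs at least that long (the abstract does not list the exact metrics, so this is an illustrative helper rather than the paper's code):

def n50(contig_lengths):
    # Smallest length L such that contigs of length >= L cover at least
    # half of the total assembly length.
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

print(n50([100, 80, 60, 40, 20]))  # -> 80, since 100 + 80 = 180 >= 300 / 2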
Laur, Celia V; McNicholl, Tara; Valaitis, Renata; Keller, Heather H
2017-05-01
There is increasing awareness of the detrimental health impact of frailty on older adults and of the high prevalence of malnutrition in this segment of the population. Experts in these 2 arenas need to be cognizant of the overlap in constructs, diagnosis, and treatment of frailty and malnutrition. There is a lack of consensus regarding the definition of malnutrition and how it should be assessed. While there is consensus on the definition of frailty, there is no agreement on how it should be measured. Separate assessment tools exist for both malnutrition and frailty; however, there is intersection between concepts and measures. This narrative review highlights some of the intersections within these screening/assessment tools, including weight loss/decreased body mass, functional capacity, and weakness (handgrip strength). The potential to identify a minimal set of objective measures that identify, or at least flag risk for, both conditions is proposed. Frailty and malnutrition have also been shown to result in similar negative health outcomes, and consequently common treatment strategies have been studied, including oral nutritional supplements. While many of the outcomes of treatment relate to both concepts of frailty and malnutrition, research questions are typically focused on the frailty concept, leading to possible gaps or missed opportunities in understanding the effect of complementary interventions on malnutrition. A better understanding of how these conditions overlap may improve treatment strategies for frail, malnourished, older adults.
[Changes in the regulation and government of the health system. SESPAS report 2014].
Repullo, José R
2014-06-01
The economic and fiscal crisis of 2008 has thrust the sustainability of health systems into the center of debate; some countries, such as Spain, have implemented strong policies of fiscal consolidation and austerity. The institutional framework and governance model of the national health system (NHS) after its devolution to the regions in 2002 had significant weaknesses, which were not apparent in the rapid growth stage but have been clearly visible since 2010. In this article, we describe the changes in government regulation from the national and NHS perspectives: both general changes (clearly prompted by the economic authorities) and those more specifically addressed to healthcare. The Royal Decree-Law 16/2012 represents the centerpiece of austerity policies in healthcare, but it also implies a rupture with the existing political consensus and a return to social security models. Our characterization of austerity in healthcare explores impacts on savings, on services, and on the healthcare model itself, although the available information allows only some indications. The conclusions highlight the need to change the path of linear, rapid, and radical budget cuts, providing a time-frame for implementing key reforms in terms of internal sustainability; to do so, it is appropriate to restore political and institutional consensus, to emphasize "clinical management" and divestment of inappropriate services (an approach to the medical profession and its role as micro-manager), and to create frameworks of good governance and organizational innovations that support these structural reforms. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
Structure and genomic organization of the human B1 receptor gene for kinins (BDKRB1).
Bachvarov, D R; Hess, J F; Menke, J G; Larrivée, J F; Marceau, F
1996-05-01
Two subtypes of mammalian bradykinin receptors, B1 and B2 (BDKRB1 and BDKRB2), have been defined based on their pharmacological properties. The B1 type kinin receptors have weak affinity for intact BK or Lys-BK but strong affinity for kinin metabolites without the C-terminal arginine (e.g., des-Arg9-BK and Lys-des-Arg9-BK, also called des-Arg10-kallidin), which are generated by kininase I. The B1 receptor expression is up-regulated following tissue injury and inflammation (hyperemia, exudation, hyperalgesia, etc.). In the present study, we have cloned and sequenced the gene encoding human B1 receptor from a human genomic library. The human B1 receptor gene contains three exons separated by two introns. The first and the second exon are noncoding, while the coding region and the 3'-flanking region are located entirely on the third exon. The exon-intron arrangement of the human B1 receptor gene shows significant similarity with the genes encoding the B2 receptor subtype in human, mouse, and rat. Sequence analysis of the 5'-flanking region revealed the presence of a consensus TATA box and of numerous candidate transcription factor binding sequences. Primer extension experiments have shown the existence of multiple transcription initiation sites situated downstream and upstream from the consensus TATA box. Genomic Southern blot analysis indicated that the human B1 receptor is encoded by a single-copy gene.
Research on signal processing method for total organic carbon of water quality online monitor
NASA Astrophysics Data System (ADS)
Ma, R.; Xie, Z. X.; Chu, D. Z.; Zhang, S. W.; Cao, X.; Wu, N.
2017-08-01
At present, there is no rapid, stable, and effective approach to total organic carbon (TOC) measurement in the field of marine environmental online monitoring. This paper therefore proposes a chemiluminescence signal processing method for an online TOC monitor. The weak optical signal detected by the photomultiplier tube is enhanced and converted by a series of signal processing modules: a phase-locked amplifier module, a fourth-order band-pass filter module, and an AD conversion module. Extended comparison tests and measurements show that, while maintaining sufficient accuracy, this chemiluminescence signal processing method offers greatly improved measurement speed and high practicability for online monitoring compared with the traditional method.
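The phase-locked amplification stage can be sketched as a digital lock-in: the noisy photomultiplier signal is multiplied by quadrature references at the modulation frequency and the products are low-pass filtered. Sample rate, modulation frequency, and filter settings below are invented for illustration:

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, f_mod = 1000.0, 137.0  # Hz, illustrative values
t = np.arange(0, 10.0, 1.0 / fs)

# Weak modulated chemiluminescence signal buried in detector noise.
amp = 0.05
x = amp * np.sin(2 * np.pi * f_mod * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Quadrature demodulation followed by a 4th-order, 2 Hz low-pass filter.
sos = butter(4, 2.0, btype="low", fs=fs, output="sos")
i = sosfiltfilt(sos, x * np.sin(2 * np.pi * f_mod * t))
q = sosfiltfilt(sos, x * np.cos(2 * np.pi * f_mod * t))
r = 2.0 * np.sqrt(i**2 + q**2)
print(f"recovered amplitude ~ {r[t.size // 2]:.3f} (true {amp})")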
Tunable plasmon-induced transparency effect based on self-asymmetric H-shaped resonators meta-atoms
NASA Astrophysics Data System (ADS)
Cheng, Zhaoxiang; Chen, Lin; Zang, Xiaofei; Cai, Bin; Peng, Yan; Zhu, Yiming
2015-03-01
We have proposed and demonstrated a plasmon-induced transparency (PIT) effect that is tunable in two ways, based on self-asymmetric H-shaped resonator (AHR) meta-atoms. The tunable PIT effect is realized by varying polarization angles and coupling distances. First, by proper design, a transition from the PIT mode to a dipole mode is theoretically and experimentally demonstrated by simply adjusting the polarization angle. Second, the ‘dark-mode’ resonance intensity is tuned from strong to weak by varying the coupling strength through different distances, which provides insight into the magnetic coupling hybridization mechanism. Owing to these tunable characteristics, the AHR meta-atoms may find wide use in slow-light, filtering, and switching devices.
NASA Astrophysics Data System (ADS)
Sharma, Ramesh C.; Waigh, Thomas A.; Singh, Jagdish P.
2008-03-01
The optical phase conjugation signal in nearly nondegenerate four-wave mixing was studied using a rhodamine 110 doped boric acid glass saturable absorber as the nonlinear medium. We have demonstrated a narrow-band optical filter (2.56±0.15 Hz) using the optical phase conjugation signal under frequency modulation of a weak probe beam in the presence of two strong counterpropagating pump beams in rhodamine 110 doped boric acid glass thin films (10^-4 m). Both the pump beams and the probe beam are at a wavelength of 488 nm (continuous-wave Ar+ laser). The probe beam frequency was detuned with a ramp signal using a piezoelectric transducer mirror.
Final S020 Skylab experiment report
NASA Technical Reports Server (NTRS)
Tousey, R.; Garrett, D. L.
1975-01-01
After the loss of the meteoroid shield, the solar scientific airlock was required to erect the sun shade, so methods were improvised to operate the S020 experiment during EVAs. Almost no data were obtained in the wavelength range 10 to 110 A. From 110 to 280 A the spectra were 10 to 100 times less intense than expected. A probable cause of the loss of instrument sensitivity is contamination of the filters by the spacecraft coolant. A list of observed lines is presented. Although less data were obtained than expected, several lines not previously observed were recorded, and the spectra serve to confirm many very faintly observed weak lines recorded from sounding rockets by other experiments.
Bisciotti, G N; Volpi, P; Zini, R; Auci, A; Aprato, A; Belli, A; Bellistri, G; Benelli, P; Bona, S; Bonaiuti, D; Carimati, G; Canata, G L; Cassaghi, G; Cerulli, S; Delle Rose, G; Di Benedetto, P; Di Marzo, F; Di Pietto, F; Felicioni, L; Ferrario, L; Foglia, A; Galli, M; Gervasi, E; Gia, L; Giammattei, C; Guglielmi, A; Marioni, A; Moretti, B; Niccolai, R; Orgiani, N; Pantalone, A; Parra, F; Quaglia, A; Respizzi, F; Ricciotti, L; Pereira Ruiz, M T; Russo, A; Sebastiani, E; Tancredi, G; Tosi, F; Vuckovic, Z
2016-01-01
The nomenclature and the lack of consensus of clinical evaluation and imaging assessment in groin pain generate significant confusion in this field. The Groin Pain Syndrome Italian Consensus Conference has been organised in order to prepare a consensus document regarding taxonomy, clinical evaluation and imaging assessment for groin pain. A 1-day Consensus Conference was organised on 5 February 2016, in Milan (Italy). 41 Italian experts with different backgrounds participated in the discussion. A consensus document previously drafted was discussed, eventually modified, and finally approved by all members of the Consensus Conference. Unanimous consensus was reached concerning: (1) taxonomy (2) clinical evaluation and (3) imaging assessment. The synthesis of these 3 points is included in this paper. The Groin Pain Syndrome Italian Consensus Conference reached a consensus on three main points concerning the groin pain syndrome assessment, in an attempt to clarify this challenging medical problem. PMID:28890800
Klokker, Louise; Tugwell, Peter; Furst, Daniel E; Devoe, Dan; Williamson, Paula; Terwee, Caroline B; Suarez-Almazor, Maria E; Strand, Vibeke; Woodworth, Thasia; Leong, Amye L; Goel, Niti; Boers, Maarten; Brooks, Peter M; Simon, Lee S; Christensen, Robin
2017-12-01
Failure to report harmful outcomes in clinical research can introduce bias favoring a potentially harmful intervention. While core outcome sets (COS) are available for benefits in randomized controlled trials in many rheumatic conditions, less attention has been paid to safety in such COS. The Outcome Measures in Rheumatology (OMERACT) Filter 2.0 emphasizes the importance of measuring harms. The Safety Working Group was reestablished at OMERACT 2016 with the objective of developing a COS for assessing safety components in trials across rheumatologic conditions. The safety issue has previously been discussed at OMERACT, but without a consistent approach to ensure harms were included in COS. Our methods include (1) identifying harmful outcomes in trials of interventions studied in patients with rheumatic diseases by a systematic literature review, (2) identifying components of safety that should be measured in such trials by use of a patient-driven approach including qualitative data collection and statistical organization of data, and (3) developing a COS through consensus processes including everyone involved. Members of OMERACT including patients, clinicians, researchers, methodologists, and industry representatives reached consensus on the need to continue the efforts on developing a COS for safety in rheumatology trials. There was general agreement about the need to identify safety-related outcomes that are meaningful to patients, framed in terms that patients consider relevant, so that they will be able to make informed decisions. The OMERACT Safety Working Group will advance the work previously done within OMERACT using a new patient-driven approach.
Grose, Jane; Richardson, Janet
2014-01-01
The uninterrupted supply of essential items for patient care is crucial for organizations that deliver health care. Many products central to health care are derived from natural resources such as oil and cotton, supplies of which are vulnerable to climate change and increasing global demand. The purpose of this study was to identify which items would have the greatest effect on service delivery and patient outcomes should they no longer be available. Using a consensus development approach, all items bought by one hospital, over one year, were subjected to a filtering process. Criteria were developed to identify at-risk products and assess them against specific risks and opportunities. Seventy-two items were identified for assessment against a range of potential impacts on service delivery and patient outcomes, from no impact to significant impact. Clinical and non-clinical participants rated the items. In the category of significant impact, consensus was achieved for 20 items out of 72. There were differences of opinion between clinical and non-clinical participants in terms of significant impact in relation to 18 items, suggesting that priority over purchasing decisions may create areas of conflict. Reducing reliance on critically scarce resources and reducing demand were seen as the most important criteria in developing sustainable procurement. The method was successful in identifying items vulnerable to supply chain interruption and should be repeated in other areas to test its ability to adapt to local priorities, and to assess how it functions in a variety of public and private settings.
Toupin-April, Karine; Barton, Jennifer; Fraenkel, Liana; Li, Linda; Grandpierre, Viviane; Guillemin, Francis; Rader, Tamara; Stacey, Dawn; Légaré, France; Jull, Janet; Petkovic, Jennifer; Scholte-Voshaar, Marieke; Welch, Vivian; Lyddiatt, Anne; Hofstetter, Cathie; De Wit, Maarten; March, Lyn; Meade, Tanya; Christensen, Robin; Gaujoux-Viala, Cécile; Suarez-Almazor, Maria E; Boonen, Annelies; Pohl, Christoph; Martin, Richard; Tugwell, Peter S
2015-12-01
Despite the importance of shared decision making for delivering patient-centered care in rheumatology, there is no consensus on how to measure its process and outcomes. The aim of this Outcome Measures in Rheumatology (OMERACT) working group is to determine the core set of domains for measuring shared decision making in intervention studies in adults with osteoarthritis (OA), from the perspectives of patients, health professionals, and researchers. We followed the OMERACT Filter 2.0 method to develop a draft core domain set by (1) forming an OMERACT working group; (2) conducting a review of domains of shared decision making; and (3) obtaining opinions of all those involved using a modified nominal group process held at a session activity at the OMERACT 12 meeting. In all, 26 people from Europe, North America, and Australia, including 5 patient research partners, participated in the session activity. Participants identified the following domains for measuring shared decision making to be included as part of the draft core set: (1) identifying the decision, (2) exchanging information, (3) clarifying views, (4) deliberating, (5) making the decision, (6) putting the decision into practice, and (7) assessing the effect of the decision. Contextual factors were also suggested. We proposed a draft core set of shared decision-making domains for OA intervention research studies. Next steps include a workshop at OMERACT 13 to reach consensus on these proposed domains in the wider OMERACT group, as well as to detail subdomains and assess instruments to develop a core outcome measurement set.
NASA Astrophysics Data System (ADS)
Jacobs, P.; Cook, J.; Nuccitelli, D.
2014-12-01
An overwhelming scientific consensus exists on the issue of anthropogenic climate change. Unfortunately, public perception of expert agreement remains low: only around 1 in 10 Americans correctly estimates the actual level of consensus on the topic. Moreover, several recent studies have demonstrated the pivotal role that perceived consensus plays in the public's acceptance of key scientific facts about environmental problems, as well as their willingness to support policy to address them. This "consensus gap" between the high level of scientific agreement and the public's perception of it has led to calls for increased consensus messaging. However, this call has been challenged by a number of different groups: climate "skeptics" in denial about the existence and validity of the consensus; some social science researchers and journalists who believe that such messages will be ineffective or counterproductive; and even some scientists and science advocates who downplay the value of consensus in science generally. All of these concerns can be addressed by effectively communicating to the public the role of consensus within science, as well as the conditions under which a consensus is likely to be correct. Here, we demonstrate that the scientific consensus on anthropogenic climate change satisfies these conditions, and discuss past examples of purported consensus that failed or succeeded to satisfy them as well. We conclude by discussing the way in which scientific consensus is interpreted by the public, and how consensus messaging can improve climate literacy.
The formation of graben morphology in the Dead Sea Fault, and its implications
NASA Astrophysics Data System (ADS)
Ben-Avraham, Zvi; Katsman, Regina
2015-09-01
The Dead Sea Fault (DSF) is a 1000 km long continental transform. It forms a narrow and elongated valley with uplifted shoulders showing an east-west asymmetry, which is not common in other continental transforms. This topography may have strongly affected the course of human history. Several papers have addressed the geomorphology of the DSF, but there is still no consensus on the dominant mechanism of its formation. Our thermomechanical modeling demonstrates that the existence of a transform prior to the rifting predefined high strain softening on the faults in the strong upper crust and created a precursor weak zone that localized deformation in the subsequent transtensional period. Together with a slow rate of extension over the Arabian plate, these factors controlled the narrow asymmetric morphology of the fault. This rift pattern was enhanced by rapid deposition of evaporites from the Sedom Lagoon, which occupied the rift depression for a short period.
The feasibility of harmonizing gluten ELISA measurements.
Rzychon, Malgorzata; Brohée, Marcel; Cordeiro, Fernando; Haraszi, Reka; Ulberth, Franz; O'Connor, Gavin
2017-11-01
Many publications have highlighted that routine ELISA methods do not give rise to equivalent gluten content measurement results. In this study, we assess this variation between results and its likely impact on the enforcement of the EU gluten-free legislation. The study systematically examines the feasibility of harmonizing gluten ELISA assays by introducing a common extraction procedure, a common calibrator (such as a pure gluten extract), and an incurred matrix material. The comparability of measurements is limited by a weak correlation between kit results caused by differences in the selectivity of the methods. This lack of correlation produces bias that cannot be corrected by using reference materials alone. The use of a common calibrator reduced the between-assay variability to some extent, but variation due to differences in the selectivity of the assays was unaffected. Consensus on robust markers and their conversion to "gluten content" is required. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
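The common-calibrator step amounts to rescaling each kit's raw readings by its own recovery on a shared reference extract; the numbers below are invented to show why this removes calibration bias but cannot reconcile selectivity differences:

CALIBRATOR_ASSIGNED = 100.0  # assigned gluten content of the shared extract, mg/kg (invented)

def harmonize(raw_result, kit_calibrator_reading):
    # Rescale a kit's raw result by its recovery on the common calibrator.
    recovery = kit_calibrator_reading / CALIBRATOR_ASSIGNED
    return raw_result / recovery

# Two hypothetical kits measuring the same sample (true value ~18 mg/kg):
# kit A over-calibrated by 25%, kit B under-calibrated by 20%.
print(harmonize(22.5, kit_calibrator_reading=125.0))  # -> 18.0
print(harmonize(14.4, kit_calibrator_reading=80.0))   # -> 18.0

If two kits instead target different gluten epitopes, their raw readings on the same sample differ for reasons no common scale factor can remove, which is the residual variation reported above.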
Associations between school-level environment and science classroom environment in secondary schools
NASA Astrophysics Data System (ADS)
Dorman, Jeffrey P.; Fraser, Barry J.; McRobbie, Campbell J.
1995-09-01
This article describes a study of links between school environment and science classroom environment. Instruments to assess seven dimensions of school environment (viz., Empowerment, Student Support, Affiliation, Professional Interest, Mission Consensus, Resource Adequacy and Work Pressure) and seven dimensions of classroom environment (viz., Student Affiliation, Interactions, Cooperation, Task Orientation, Order & Organisation, Individualisation and Teacher Control) in secondary school science classrooms were developed and validated. The study involved a sample of 1,318 students in 64 year 9 and year 12 science classes and 128 teachers of science in Australian secondary schools. Using the class mean as the unit of analysis for student data, associations between school and classroom environment were investigated using simple, multiple and canonical correlational analyses. In general, results indicated weak relationships between school and classroom environments and they reinforced the view that characteristics of the school environment are not transmitted automatically into science classrooms.
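The canonical correlational step can be illustrated with synthetic data standing in for the seven school-level and seven classroom-level scores per class (the real instrument data are not reproduced here):

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_classes = 64

# Class-mean scores sharing one weak latent factor, mimicking the weak
# school-classroom association reported in the study.
latent = rng.standard_normal((n_classes, 1))
X = 0.3 * latent + rng.standard_normal((n_classes, 7))  # school environment
Y = 0.3 * latent + rng.standard_normal((n_classes, 7))  # classroom environment

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")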
Tactical lighting in special operations medicine: survey of current preferences.
Calvano, Christopher J; Enzenauer, Robert W; Eisnor, Derek L; Laporta, Anthony J
2013-01-01
Success in Special Operations Forces medicine (SOFMED) is dependent on maximizing visual capability without compromising the provider or casualty position when under fire. There is no single ideal light source suitable for varied SOFMED environments. We present the results of an online survey of Special Operations Medical Operators in an attempt to determine strengths and weaknesses of current systems. There was no consensus ideal hue for tactical illumination. Most Operators own three or more lights, and most lights were not night vision compatible. Most importantly, nearly 25% of respondents reported that lighting issues contributed to a poor casualty outcome; conversely, a majority (50 of 74) stated their system helped prevent a poor outcome. Based on the results of this initial survey, we can affirm that the design and choice of lighting is critical to SOFMED success. We are conducting ongoing studies to further define ideal systems for tactical applications including field, aviation, and marine settings.
Lower limb muscle impairment in myotonic dystrophy type 1: the need for better guidelines.
Petitclerc, Émilie; Hébert, Luc J; Desrosiers, Johanne; Gagnon, Cynthia
2015-04-01
In myotonic dystrophy type 1 (DM1), leg muscle weakness is a major impairment. There are challenges to obtaining a clear portrait of muscle strength impairment. A systematic literature review was conducted on lower limb strength impairment in late-onset and adult phenotypes to document variables which affect strength measurement. Thirty-two articles were reviewed using the COSMIN guidelines. Only a third of the studies described a reproducible protocol. Only 2 muscle groups have documented reliability for quantitative muscle testing and only 1 total score for manual muscle testing. Variables affecting muscle strength impairment are not described in most studies. This review illustrates the variability in muscle strength assessment in relation to DM1 characteristics and the questionable validity of the results with regard to undocumented methodological properties. There is therefore a clear need to adopt a consensus on the use of a standardized muscle strength assessment protocol. © 2015 Wiley Periodicals, Inc.
Diagnosing dehydration? Blend evidence with clinical observations.
Armstrong, Lawrence E; Kavouras, Stavros A; Walsh, Neil P; Roberts, William O
2016-11-01
The purpose of the review is to provide recommendations to improve clinical decision-making based on the strengths and weaknesses of commonly used hydration biomarkers and clinical assessment methods. There is widespread consensus regarding treatment, but not the diagnosis of dehydration. Even though it is generally accepted that a proper clinical diagnosis of dehydration can only be made biochemically rather than relying upon clinical signs and symptoms, no gold standard biochemical hydration index exists. Other than clinical biomarkers in blood (i.e., osmolality and blood urea nitrogen/creatinine) and in urine (i.e., osmolality and specific gravity), blood pressure assessment and clinical symptoms in the eye (i.e., tear production and palpitating pressure) and the mouth (i.e., thirst and mucous wetness) can provide important information for diagnosing dehydration. We conclude that clinical observations based on a combination of history, physical examination, laboratory values, and clinician experience remain the best approach to the diagnosis of dehydration.
Bujarski, Spencer; Ray, Lara A.
2016-01-01
In spite of high prevalence and disease burden, scientific consensus on the etiology and treatment of Alcohol Use Disorder (AUD) has yet to be reached. The development and utilization of experimental psychopathology paradigms in the human laboratory represents a cornerstone of AUD research. In this review, we describe and critically evaluate the major experimental psychopathology paradigms developed for AUD, with an emphasis on their implications, strengths, weaknesses, and methodological considerations. Specifically we review alcohol administration, self-administration, cue-reactivity, and stress-reactivity paradigms. We also provide an introduction to the application of experimental psychopathology methods to translational research including genetics, neuroimaging, pharmacological and behavioral treatment development, and translational science. Through refining and manipulating key phenotypes of interest, these experimental paradigms have the potential to elucidate AUD etiological factors, improve the efficiency of treatment developments, and refine treatment targets thus advancing precision medicine. PMID:27266992
GMO Reignited in Science but Not in Law: A Flawed Framework Fuels France's Stalemate.
Robbins, Patricia B
2014-01-01
Following a statement released by a multitude of prominent scientists contesting the idea that there is a consensus on the safety of genetically modified organisms ("GMO"), this article addresses the European Union's ("EU") GMO regulatory framework, which has reluctantly permitted France to maintain an illegal ban on MON810 for over a decade. It notes that while the statement did nothing more than reignite the debate on GMO, much could and should be done to improve the framework to accommodate the lack of true scientific understanding about the effects of GMO. This article identifies the specific areas of weakness in the EU GMO regulatory framework and recommends specific alterations. It concludes that although France's MON810 ban is illegal under existing law, the country's fears are neither unfounded nor unsupported and that the EU should work to alter its existing legal structure to parallel today's scientific uncertainty regarding GMO safety.
Airliner cabin ozone: An updated review. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melton, C.E.
1989-12-01
The recent literature pertaining to ozone contamination of airliner cabins is reviewed. Measurements in airliner cabins without filters showed that ozone levels were about 50 percent of atmospheric ozone. Filters were about 90 percent effective in destroying ozone. Ozone (0.12 to 0.14 ppmv) caused mild subjective respiratory irritation in exercising men, but 0.20 to 0.30 ppmv did not have adverse effects on patients with chronic heart or lung disease. Ozone (1.0 to 2.0 ppmv) decreased survival time of influenza-infected rats and mice and suppressed the capacity of lung macrophages to destroy Listeria. Airway responses to ozone are divided into an early parasympathetically mediated bronchoconstrictive phase and a later histamine-mediated congestive phase. Evidence indicates that intracellular free radicals are responsible for ozone damage and that the damage may be spread to other cells by toxic intermediate products. Antioxidants provide some protection to cells in vitro from ozone, but dietary intake of antioxidant vitamins by humans has only a weak effect, if any. This review indicates that earlier findings regarding ozone toxicity do not need to be corrected. Compliance with existing FAA ozone standards appears to provide adequate protection to aircrews and passengers.
Predicting the Types of Ion Channel-Targeted Conotoxins Based on AVC-SVM Model.
Xianfang, Wang; Junmei, Wang; Xiaolei, Wang; Yue, Zhang
2017-01-01
The conotoxin proteins are disulfide-rich small peptides. Predicting the types of ion channel-targeted conotoxins has great value in the treatment of chronic diseases, epilepsy, and cardiovascular diseases. To solve the problem of information redundancy existing when using current methods, a new model is presented to predict the types of ion channel-targeted conotoxins based on AVC (Analysis of Variance and Correlation) and SVM (Support Vector Machine). First, the F value is used to measure the significance level of the feature for the result, and the attribute with smaller F value is filtered by rough selection. Secondly, redundancy degree is calculated by Pearson Correlation Coefficient. And the threshold is set to filter attributes with weak independence to get the result of the refinement. Finally, SVM is used to predict the types of ion channel-targeted conotoxins. The experimental results show the proposed AVC-SVM model reaches an overall accuracy of 91.98%, an average accuracy of 92.17%, and the total number of parameters of 68. The proposed model provides highly useful information for further experimental research. The prediction model will be accessed free of charge at our web server.
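The two-stage AVC selection lends itself to a compact sketch. The following is illustrative only, assuming scikit-learn's ANOVA F-test for the rough selection and a Pearson-correlation cutoff for the refinement; the thresholds, synthetic data, and function names are our assumptions, not the authors' code.

```python
# Hypothetical sketch of the AVC-SVM pipeline described above:
# ANOVA F-value rough selection, Pearson-correlation redundancy
# filtering, then an SVM classifier. Thresholds are illustrative.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC

def avc_select(X, y, f_quantile=0.5, corr_threshold=0.9):
    # Rough selection: drop features whose ANOVA F value is below
    # the chosen quantile of all F values.
    F, _ = f_classif(X, y)
    keep = np.where(F >= np.quantile(F, f_quantile))[0]
    # Refinement: among the survivors, drop features that are highly
    # correlated (Pearson) with an already-kept feature.
    selected = []
    for j in keep[np.argsort(-F[keep])]:          # strongest F first
        r = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in selected]
        if not r or max(r) < corr_threshold:
            selected.append(j)
    return np.array(selected)

# Usage with synthetic stand-in data (real input would be peptide features).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 40)), rng.integers(0, 4, size=120)
cols = avc_select(X, y)
clf = SVC(kernel="rbf").fit(X[:, cols], y)
print(len(cols), clf.score(X[:, cols], y))
```

Ranking the survivors by F value means that, when two features are strongly correlated, the more discriminative one survives the redundancy filter.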
Hwang, S H; Yi, T W; Cho, K H; Lee, I M; Yoon, C S
2011-09-01
To test the performance of microbiological safety cabinets (MSCs) of different types in microbiology laboratories. Tests were carried out to assess the performance of 31 MSCs in 14 different facilities, including six different biological test laboratories in six hospitals and eight different laboratories in three universities. The following tests were performed on the MSCs: the downflow test, intake velocity test, high-efficiency particulate air (HEPA) filter leak test and the airflow smoke pattern test. These performance tests were carried out in accordance with the standard procedures. Only 23% of Class II A1 (8), A2 (19) and unknown MSCs (4) passed these performance tests. The main reasons for failure were inappropriate intake velocity (65%), leakage in the HEPA filter sealing (50%), unbalanced airflow smoke patterns in the cabinets (39%) and inappropriate downflow (27%). This study showed that routine checks of MSCs are important to detect and strengthen the weak spots that frequently develop, as observed during the evaluation of the MSCs of various institutions. Routine evaluation and maintenance of MSCs are critical for optimizing performance. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
Computed tomographic images using tube source of x rays: interior properties of the material
NASA Astrophysics Data System (ADS)
Rao, Donepudi V.; Takeda, Tohoru; Itai, Yuji; Seltzer, S. M.; Hubbell, John H.; Zeniya, Tsutomu; Akatsuka, Takao; Cesareo, Roberto; Brunetti, Antonio; Gigante, Giovanni E.
2002-01-01
An image intensifier based computed tomography scanner and a tube source of x-rays are used to obtain images of small objects, plastics, wood and soft materials in order to determine the interior properties of the material. A new method is developed to estimate the degree of monochromacy, total solid angle, efficiency and geometrical effects of the measuring system, and the way to produce monoenergetic radiation. The flux emitted by the x-ray tube is filtered using appropriate filters at the chosen optimum energy; reasonable monochromacy is achieved and the images are acceptably distinct. Much attention has been focused on the imaging of small objects of weakly attenuating materials at the optimum energy, at which it is possible to calculate a three-dimensional representation of the inner and outer surfaces of the object. The image contrast between soft materials could be significantly enhanced by optimal selection of the x-ray energy using Monte Carlo methods. The imaging system is compact, reasonably economic, offers good contrast resolution, simple operation and routine availability, and demonstrates the use of optimized tomography for various applications.
Asian consensus on irritable bowel syndrome.
Gwee, Kok-Ann; Bak, Young-Tae; Ghoshal, Uday Chand; Gonlachanvit, Sutep; Lee, Oh Young; Fock, Kwong Ming; Chua, Andrew Seng Boon; Lu, Ching-Liang; Goh, Khean-Lee; Kositchaiwat, Chomsri; Makharia, Govind; Park, Hyo-Jin; Chang, Full-Young; Fukudo, Shin; Choi, Myung-Gyu; Bhatia, Shobna; Ke, Meiyun; Hou, Xiaohua; Hongo, Michio
2010-07-01
Many of the ideas on irritable bowel syndrome (IBS) are derived from studies conducted in Western societies. Their relevance to Asian societies has not been critically examined. Our objectives were to bring to attention important data from Asian studies, articulate the experience and views of our Asian experts, and provide a relevant guide on this poorly understood condition for doctors and scientists working in Asia. A multinational group of physicians from Asia with special interest in IBS raised statements on IBS pertaining to symptoms, diagnosis, epidemiology, infection, pathophysiology, motility, management, and diet. A modified Delphi approach was employed to present and grade the quality of evidence, and determine the level of agreement. We observed that bloating and symptoms associated with meals were prominent complaints among our IBS patients. In the majority of our countries, we did not observe a female predominance. In some Asian populations, the intestinal transit times in healthy and IBS patients appear to be faster than those reported in the West. High consultation rates were observed, particularly in the more affluent countries. There was only weak evidence to support the perception that psychological distress determines health-care seeking. Dietary factors, in particular, chili consumption and the high prevalence of lactose malabsorption, were perceived to be aggravating factors, but the evidence was weak. This detailed compilation of studies from different parts of Asia draws attention to Asian patients' experiences of IBS.
Weak Negative and Positive Selection and the Drift Load at Splice Sites
Denisov, Stepan V.; Bazykin, Georgii A.; Sutormin, Roman; Favorov, Alexander V.; Mironov, Andrey A.; Gelfand, Mikhail S.; Kondrashov, Alexey S.
2014-01-01
Splice sites (SSs) are short sequences that are crucial for proper mRNA splicing in eukaryotic cells, and therefore can be expected to be shaped by strong selection. Nevertheless, in mammals and in other intron-rich organisms, many of the SSs often involve nonconsensus (Nc), rather than consensus (Cn), nucleotides, and beyond the two critical nucleotides, the SSs are not perfectly conserved between species. Here, we compare the SS sequences between primates, and between Drosophila fruit flies, to reveal the pattern of selection acting at SSs. Cn-to-Nc substitutions are less frequent, and Nc-to-Cn substitutions are more frequent, than neutrally expected, indicating, respectively, negative and positive selection. This selection is relatively weak (1 < |4Nes| < 4), and has a similar efficiency in primates and in Drosophila. Within some nucleotide positions, the positive selection in favor of Nc-to-Cn substitutions is weaker than the negative selection maintaining already established Cn nucleotides; this difference is due to site-specific negative selection favoring current Nc nucleotides. In general, however, the strength of negative selection protecting the Cn alleles is similar in magnitude to the strength of positive selection favoring replacement of Nc alleles, as expected under the simple nearly neutral turnover. In summary, although a fraction of the Nc nucleotides within SSs is maintained by selection, the abundance of deleterious nucleotides in this class suggests a substantial genome-wide drift load. PMID:24966225
Risso-Gill, Isabelle; McKee, Martin; Coker, Richard; Piot, Peter; Legido-Quigley, Helena
2014-07-01
Myanmar has undergone a remarkable political transformation in the last 2 years, with its leadership voluntarily transitioning from an isolated military regime to a quasi-civilian government intent on re-engaging with the international community. Decades of underinvestment have left the country underdeveloped, with a fragile health system and poor health outcomes. International aid agencies have found engagement with the Myanmar government difficult, but this is changing rapidly, and it is opportune to consider how Myanmar can engage with the global health system strengthening (HSS) agenda. Nineteen semi-structured, face-to-face interviews were conducted with representatives from international agencies working in Myanmar to capture their perspectives on HSS following political reform, exploring their perceptions of HSS and the opportunities for implementation. Participants reported challenges in engaging with government, reflecting the disharmony between actors, economic sanctions, and barriers to service delivery due to health system weaknesses and bureaucracy. Weaknesses included human resources, data, and medical products/infrastructure, along with logistical challenges. Agencies had mixed views of health system finance and governance, identifying problems and also some positive aspects. There is little consensus on how HSS should be approached in Myanmar, but much interest in collaborating to achieve it. Despite myriad challenges and concerns, participants were generally positive about the recent political changes, and remain optimistic as they engage in HSS activities with the government.
Sprung, Charles L; Truog, Robert D; Curtis, J Randall; Joynt, Gavin M; Baras, Mario; Michalsen, Andrej; Briegel, Josef; Kesecioglu, Jozef; Efferen, Linda; De Robertis, Edoardo; Bulpa, Pierre; Metnitz, Philipp; Patil, Namrata; Hawryluck, Laura; Manthous, Constantine; Moreno, Rui; Leonard, Sara; Hill, Nicholas S; Wennberg, Elisabet; McDermid, Robert C; Mikstacki, Adam; Mularski, Richard A; Hartog, Christiane S; Avidan, Alexander
2014-10-15
Great differences in end-of-life practices in treating the critically ill around the world warrant agreement regarding the major ethical principles. This analysis determines the extent of worldwide consensus for end-of-life practices, delineates where there is and is not consensus, and analyzes reasons for lack of consensus. Critical care societies worldwide were invited to participate. Country coordinators were identified and draft statements were developed for major end-of-life issues and translated into six languages. Multidisciplinary responses using a web-based survey assessed agreement or disagreement with definitions and statements linked to anonymous demographic information. Consensus was prospectively defined as >80% agreement. Definitions and statements not obtaining consensus were revised based on comments of respondents, and then translated and redistributed. Of the initial 1,283 responses from 32 countries, consensus was found for 66 (81%) of the 81 definitions and statements; 26 (32%) had >90% agreement. With 83 additional responses to the original questionnaire (1,366 total) and 604 responses to the revised statements, consensus could be obtained for another 11 of the 15 statements. Consensus was obtained for informed consent, withholding and withdrawing life-sustaining treatment, legal requirements, intensive care unit therapies, cardiopulmonary resuscitation, shared decision making, medical and nursing consensus, brain death, and palliative care. Consensus was obtained for 77 of 81 (95%) statements. Worldwide consensus could be developed for the majority of definitions and statements about end-of-life practices. Statements achieving consensus provide standards of practice for end-of-life care; statements without consensus identify important areas for future research.
Waggoner, Jane; Carline, Jan D; Durning, Steven J
2016-05-01
The authors of this article reviewed the methodology of three common consensus methods: nominal group process, consensus development panels, and the Delphi technique. The authors set out to determine how a majority of researchers are conducting these studies, how they are analyzing results, and, subsequently, the manner in which they are reporting their findings. The authors conclude with a set of guidelines and suggestions designed to aid researchers who choose to use the consensus methodology in their work. Overall, researchers need to describe their inclusion criteria. In addition, on the basis of the current literature, the authors found that a panel size of 5 to 11 members was most beneficial across all consensus methods described. Lastly, the authors agreed that the statistical analyses done in consensus method studies should be as rigorous as possible and that the predetermined definition of consensus must be included in the final manuscript. More specific recommendations are given for each of the three consensus methods described in the article.
Diamond, Ivan R; Grant, Robert C; Feldman, Brian M; Pencharz, Paul B; Ling, Simon C; Moore, Aideen M; Wales, Paul W
2014-04-01
To investigate how consensus is operationalized in Delphi studies and to explore the role of consensus in determining the results of these studies. Systematic review of a random sample of 100 English-language Delphi studies, from two large multidisciplinary databases [ISI Web of Science (Thomson Reuters, New York, NY) and Scopus (Elsevier, Amsterdam, NL)], published between 2000 and 2009. Ninety-eight of the 100 Delphi studies purported to assess consensus, although a definition of consensus was provided in only 72 of the studies (64 a priori). The most common definition of consensus was percent agreement (25 studies), with 75% being the median threshold to define consensus. Although the authors concluded in 86 of the studies that consensus was achieved, consensus was only specified a priori (with a threshold value) in 42 of these studies. Achievement of consensus was related to the decision to stop the Delphi study in only 23 studies, with 70 studies terminating after a specified number of rounds. Although consensus generally is felt to be of primary importance to the Delphi process, definitions of consensus vary widely and are poorly reported. Improved criteria for reporting of methods of Delphi studies are required. Copyright © 2014 Elsevier Inc. All rights reserved.
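For concreteness, the percent-agreement rule that the review found most common (median threshold 75%) can be written in a few lines; the rating scale, the notion of "agree", and the panel values below are invented for illustration.

```python
# Illustrative only: the percent-agreement rule that the review found
# most common (median threshold 75%). Ratings and threshold are made up.
def percent_agreement(ratings, agree_values={4, 5}):
    """Share of panelists whose rating falls in the 'agree' range."""
    return sum(r in agree_values for r in ratings) / len(ratings)

panel = [5, 4, 4, 3, 5, 4, 2, 5, 4]          # one Delphi item, 9 raters
pa = percent_agreement(panel)
print(f"{pa:.0%} agreement -> consensus: {pa >= 0.75}")
```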
NASA Astrophysics Data System (ADS)
Song, Qing; Zhu, Sijia; Yan, Han; Wu, Wenqian
2008-03-01
Parallel light projection for diameter measurement projects the workpiece to be measured onto the photosensitive units of a CCD, but the original signal output from the CCD cannot be directly used for counting or measurement. The weak signal with high-frequency noise must first be filtered and amplified. This paper introduces an RC low-pass filter and a multiple-feedback second-order low-pass filter with infinite gain. Additionally, there is always dispersion on the light band, and the output signal has a transition between the irradiant area and the shadow, because of the instability of the light source intensity and the imperfection of the light system adjustment. To obtain exactly the shadow size related to the workpiece diameter, binary-value processing is necessary to achieve a square wave. The comparison method and the differential method can be adopted for binary-value processing. There are two ways to decide the threshold value when using a voltage comparator: the fixed level method and the floated level method; the latter has higher accuracy. The differential method first outputs two spike pulses of opposite polarity at the rising edge and the falling edge of the video signal through the differential circuit; then the rising edge of the signal output from the differential circuit is acquired by a half-wave rectifying circuit. After traveling through the zero-crossing comparator and the maintain-resistance edge trigger, the square wave which indicates the measured size is acquired at last, and it is then used for filling with standard pulses and for counting through the counter. Data acquisition and information processing are accomplished by the computer and the control software. This paper introduces in detail the design and analysis of the filter circuit, the binary-value processing circuit and the interface circuit towards the computer.
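A minimal digital analogue of the processing chain the abstract describes (RC-style low-pass smoothing, then binary-value processing with a floated threshold) might look as follows; the sampled scan line, filter constant, and threshold rule are illustrative assumptions, not the paper's circuit.

```python
# Sketch: first-order RC-style low-pass smoothing of a sampled CCD line
# signal, followed by binary-value processing with a floated (adaptive)
# threshold. All constants are illustrative.
import numpy as np

def rc_lowpass(x, alpha=0.2):
    # y[n] = y[n-1] + alpha * (x[n] - y[n-1]), the discrete RC filter.
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

def binarize_floated(x):
    # Floated-level comparator: threshold midway between the irradiant
    # plateau and the shadow floor of this particular scan line.
    thr = 0.5 * (x.max() + x.min())
    return (x > thr).astype(int)

# Synthetic scan line: bright field with a shadow (the workpiece) plus noise.
n = np.arange(1000)
line = 1.0 * (np.abs(n - 500) > 150) + 0.05 * np.random.randn(1000)
square = binarize_floated(rc_lowpass(line))
print("shadow width in pixels:", int((square == 0).sum()))
```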
Shoeib, Mahiba; Schuster, Jasmin; Rauert, Cassandra; Su, Ky; Smyth, Shirley-Anne; Harner, Tom
2016-11-01
The potential of wastewater treatment plants (WWTPs) to act as sources of poly- and perfluoroalkyl substances (PFASs), volatile methyl siloxanes (VMSs) and organic UV-filters to the atmosphere was investigated. Target compounds included: PFASs (fluorotelomer alcohols (FTOHs), perfluorooctane sulfonamides/sulfonamidoethanols (FOSAs/FOSEs), perfluoroalkyl sulfonic acids (PFSAs) and perfluoroalkyl carboxylic acids (PFCAs)), cyclic VMSs (D3 to D6), linear VMSs (L3 to L5) and eight UV-filters. Emissions to air were assessed at eight WWTPs using paired sorbent-impregnated polyurethane foam passive air samplers, deployed during summer 2013 and winter 2014. Samplers were deployed on-site above the active tank and off-site as a reference. Several types of WWTPs were investigated: secondary activated sludge in urban areas (UR-AS), secondary extended aeration in towns (TW-EA) and facultative lagoons in rural areas (RU-LG). The concentrations of target compounds in air were ∼1.7-35 times higher on-site compared to the corresponding off-site location. The highest concentrations in air were observed at UR-AS sites, while the lowest were at RU-LG sites. Higher air concentrations (∼2-9 times) were observed on-site during summer compared to winter, possibly reflecting enhanced volatilization due to higher wastewater temperatures or differences in influent wastewater concentrations. A significant positive correlation was obtained between concentrations in air and WWTP characteristics (influent flow rate and population in the catchment of the WWTP), whereas a weak negative correlation was obtained with hydraulic retention time. Emissions to air were estimated using a simplified dispersion model and were highest at the UR-AS locations. Emissions to air (g/year/tank) were highest for VMSs (5000-112,000), followed by UV-filters (16-2000), then ΣPFASs (10-110). Copyright © 2016. Published by Elsevier Ltd.
No. 347-Obstetric Management at Borderline Viability.
Ladhani, Noor Niyar N; Chari, Radha S; Dunn, Michael S; Jones, Griffith; Shah, Prakesh; Barrett, Jon F R
2017-09-01
The primary objective of this guideline was to develop consensus statements to guide clinical practice and recommendations for obstetric management of a pregnancy at borderline viability, currently defined as prior to 25+6 weeks. Intended users are clinicians involved in the obstetric management of women whose fetus is at the borderline of viability; the target population is women presenting for possible birth at borderline viability. This document presents a summary of the literature and a general consensus on the management of pregnancies at borderline viability, including maternal transfer and consultation, administration of antenatal corticosteroids and magnesium sulfate, fetal heart rate monitoring, and considerations in mode of delivery. Medline, EMBASE, and Cochrane databases were searched using the following keywords: extreme prematurity, borderline viability, preterm, pregnancy, antenatal corticosteroids, mode of delivery. The results were then studied, and relevant articles were reviewed. The references of the reviewed studies were also searched, as were documents citing pertinent studies. The evidence was then presented at a consensus meeting, and statements were developed. The content and recommendations were developed by the consensus group from the fields of Maternal-Fetal Medicine, Neonatology, Perinatal Nursing, Patient Advocacy, and Ethics. The quality of evidence was rated using criteria described in the Grading of Recommendations Assessment, Development and Evaluation methodology framework (reference 1); the interpretation of strong and weak recommendations is described later. The Board of the Society of Obstetricians and Gynaecologists of Canada approved the final draft for publication. The Summary of Findings is available upon request. A multidisciplinary approach should be used in counselling women and families at borderline viability. The impact of obstetric interventions in the improvement of neonatal outcomes is suggested in the literature, and if active resuscitation is intended, then active obstetric interventions should be considered. Evidence will be reviewed 5 years after publication to decide whether all or part of the guideline should be updated. However, if important new evidence is published prior to the 5-year cycle, the review process may be accelerated for a more rapid update of some recommendations. This guideline was developed with resources funded by the Society of Obstetricians and Gynaecologists of Canada and the Women and Babies Program at Sunnybrook Health Sciences Centre. Copyright © 2017 The Society of Obstetricians and Gynaecologists of Canada/La Société des obstétriciens et gynécologues du Canada. Published by Elsevier Inc. All rights reserved.
Galletly, Cherrie; Castle, David; Dark, Frances; Humberstone, Verity; Jablensky, Assen; Killackey, Eóin; Kulkarni, Jayashri; McGorry, Patrick; Nielssen, Olav; Tran, Nga
2016-05-01
This guideline provides recommendations for the clinical management of schizophrenia and related disorders for health professionals working in Australia and New Zealand. It aims to encourage all clinicians to adopt best practice principles. The recommendations represent the consensus of a group of Australian and New Zealand experts in the management of schizophrenia and related disorders. This guideline includes the management of ultra-high risk syndromes, first-episode psychoses and prolonged psychoses, including psychoses associated with substance use. It takes a holistic approach, addressing all aspects of the care of people with schizophrenia and related disorders, not only correct diagnosis and symptom relief but also optimal recovery of social function. The writing group planned the scope and individual members drafted sections according to their area of interest and expertise, with reference to existing systematic reviews and informal literature reviews undertaken for this guideline. In addition, experts in specific areas contributed to the relevant sections. All members of the writing group reviewed the entire document. The writing group also considered relevant international clinical practice guidelines. Evidence-based recommendations were formulated when the writing group judged that there was sufficient evidence on a topic. Where evidence was weak or lacking, consensus-based recommendations were formulated. Consensus-based recommendations are based on the consensus of a group of experts in the field and are informed by their agreement as a group, according to their collective clinical and research knowledge and experience. Key considerations were selected and reviewed by the writing group. To encourage wide community participation, the Royal Australian and New Zealand College of Psychiatrists invited review by its committees and members, an expert advisory committee and key stakeholders including professional bodies and special interest groups. The clinical practice guideline for the management of schizophrenia and related disorders reflects an increasing emphasis on early intervention, physical health, psychosocial treatments, cultural considerations and improving vocational outcomes. The guideline uses a clinical staging model as a framework for recommendations regarding assessment, treatment and ongoing care. This guideline also refers its readers to selected published guidelines or statements directly relevant to Australian and New Zealand practice. This clinical practice guideline for the management of schizophrenia and related disorders aims to improve care for people with these disorders living in Australia and New Zealand. It advocates a respectful, collaborative approach; optimal evidence-based treatment; and consideration of the specific needs of those in adverse circumstances or facing additional challenges. © The Royal Australian and New Zealand College of Psychiatrists 2016.
NASA Astrophysics Data System (ADS)
Donati, J.-F.; Hébrard, E.; Hussain, G.; Moutou, C.; Grankin, K.; Boisse, I.; Morin, J.; Gregory, S. G.; Vidotto, A. A.; Bouvier, J.; Alencar, S. H. P.; Delfosse, X.; Doyon, R.; Takami, M.; Jardine, M. M.; Fares, R.; Cameron, A. C.; Ménard, F.; Dougados, C.; Herczeg, G.; MaTYSSE Collaboration
2014-11-01
We report results of a spectropolarimetric and photometric monitoring of the weak-line T Tauri star LkCa 4 within the Magnetic Topologies of Young Stars and the Survival of close-in giant Exoplanets (MaTYSSE) programme, involving ESPaDOnS at the Canada-France-Hawaii Telescope. Despite an age of only 2 Myr and a similarity with prototypical classical T Tauri stars, LkCa 4 shows no evidence for accretion and probes an interesting transition stage for star and planet formation. Large profile distortions and Zeeman signatures are detected in the unpolarized and circularly polarized lines of LkCa 4 using Least-Squares Deconvolution (LSD), indicating the presence of brightness inhomogeneities and magnetic fields at the surface of LkCa 4. Using tomographic imaging, we reconstruct brightness and magnetic maps of LkCa 4 from sets of unpolarized and circularly polarized LSD profiles. The large-scale field is strong and mainly axisymmetric, featuring a ≃2 kG poloidal component and a ≃1 kG toroidal component encircling the star at equatorial latitudes - the latter making LkCa 4 markedly different from classical T Tauri stars of similar mass and age. The brightness map includes a dark spot overlapping the magnetic pole and a bright region at mid-latitudes - providing a good match to the contemporaneous photometry. We also find that differential rotation at the surface of LkCa 4 is small, typically ≃5.5 times weaker than that of the Sun, and compatible with solid-body rotation. Using our tomographic modelling, we are able to filter out the activity jitter in the radial velocity curve of LkCa 4 (of full amplitude 4.3 km s⁻¹) down to an rms precision of 0.055 km s⁻¹. Looking for hot Jupiters around young Sun-like stars thus appears feasible, even though we find no evidence for such planets around LkCa 4.
Ortega-Ojeda, Fernando; Calcerrada, Matías; Ferrero, Alejandro; Campos, Joaquín; Garcia-Ruiz, Carmen
2018-04-10
Ultra-weak photon emission (UPE) is the spontaneous emission from living systems mainly attributed to oxidation reactions, in which reactive oxygen species (ROS) may play a major role. Given the capability of next-generation electron-multiplying CCD (EMCCD) sensors and the easy use of liquid crystal tunable filters (LCTF), the aim of this work was to explore the potential of a simple UPE spectrometer to measure the UPE from a human hand. Thus, an easy setup was configured, based on a dark box for inserting the subject's hand, an LCTF as a monochromator, and an EMCCD sensor working in the full vertical binning (FVB) mode as the spectral detector. Under controlled conditions, both dark signals and left-hand UPE were acquired by registering the UPE intensity at different selected wavelengths (400, 450, 500, 550, 600, 650, and 700 nm) during a period of 10 min each. Then, spurious signals were filtered out by ignoring the pixels whose values were clearly outside of the Gaussian distribution, and the dark signal was subtracted from the subject hand signal. The stepped spectrum, with a peak of approximately 880 photons at 500 nm, had a shape that agreed somewhat with previous UPE research, which reported UPE from 420 to 570 nm, or 260 to 800 nm, with a range from 1 to 1000 photons s⁻¹ cm⁻². Obtaining the spectral distribution instead of the total intensity of the UPE represents a step forward in this field, as it may provide extra information about a subject's personal states and relationship with ROS. A new generation of CCD sensors with lower dark signals, and spectrographs with a more uniform spectral transmittance, will open up new possibilities for configuring measuring systems in portable formats.
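The two post-processing steps (outlier rejection against the frame's Gaussian spread, then dark subtraction) can be sketched as follows; the 3-sigma cut, array shapes, and synthetic counts are our assumptions, not the authors' pipeline.

```python
# Sketch of the two post-processing steps described: reject pixels falling
# outside the Gaussian spread of the frame (e.g. cosmic-ray hits), then
# subtract the dark signal. The 3-sigma cut and shapes are assumptions.
import numpy as np

def clean_frame(frame, nsigma=3.0):
    mu, sd = frame.mean(), frame.std()
    mask = np.abs(frame - mu) <= nsigma * sd      # keep in-distribution pixels
    return frame[mask].mean()                     # mean signal of the frame

def upe_intensity(hand_frame, dark_frame):
    # One value per wavelength setting of the LCTF.
    return clean_frame(hand_frame) - clean_frame(dark_frame)

rng = np.random.default_rng(1)
dark = rng.normal(100, 5, size=(512, 512))
hand = rng.normal(103, 5, size=(512, 512))        # weak emission on top of dark
print(f"net UPE signal: {upe_intensity(hand, dark):.2f} counts/pixel")
```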
Sun, Fujun; Fu, Zhongyuan; Wang, Chunhong; Ding, Zhaoxiang; Wang, Chao; Tian, Huiping
2017-05-20
We propose and investigate an ultra-compact air-mode photonic crystal nanobeam cavity (PCNC) with an ultra-high quality factor-to-mode volume ratio (Q/V), obtained by quadratically tapering the lattice spacing of the rectangular holes from the center to both ends while other parameters remain unchanged. Using the three-dimensional finite-difference time-domain method, an optimized geometry yields a Q of 7.2×10⁶ and a V ∼ 1.095(λ/n_Si)³ in simulations, resulting in an ultra-high Q/V ratio of about 6.5×10⁶ (λ/n_Si)⁻³. When the number of holes on either side is 8, the cavity possesses a high sensitivity of 252 nm/RIU (refractive index unit), a high calculated Q-factor of 1.27×10⁵, and an ultra-small effective V of ∼0.758(λ/n_Si)³ at the fundamental resonant wavelength of 1521.74 nm. Particularly, the footprint is only about 8×0.7 μm². However, our proposed PCNC inevitably has several higher-order resonant modes in the transmission spectrum, which makes the PCNC difficult to use for multiplexed sensing. Thus, a well-designed bandstop filter with weak sidelobes and broad bandwidth based on a photonic crystal nanobeam waveguide is created to connect with the PCNC and filter out the higher-order modes. Therefore, the integrated structure presented in this work is promising for building ultra-compact lab-on-chip sensor arrays with high density and parallel-multiplexing capability.
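As a consistency check, the quoted figure of merit follows directly from the reported Q and V:

```latex
\frac{Q}{V} \;=\; \frac{7.2\times 10^{6}}{1.095\,(\lambda/n_{\mathrm{Si}})^{3}}
\;\approx\; 6.6\times 10^{6}\,(\lambda/n_{\mathrm{Si}})^{-3},
```

in agreement with the ∼6.5×10⁶ (λ/n_Si)⁻³ quoted above (both are roundings of 6.58×10⁶).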
"Ersatz" and "hybrid" NMR spectral estimates using the filter diagonalization method.
Ridge, Clark D; Shaka, A J
2009-03-12
The filter diagonalization method (FDM) is an efficient and elegant way to make a spectral estimate purely in terms of Lorentzian peaks. As NMR spectral peaks of liquids conform quite well to this model, the FDM spectral estimate can be accurate with far fewer time domain points than conventional discrete Fourier transform (DFT) processing. However, noise is not efficiently characterized by a finite number of Lorentzian peaks, or by any other analytical form, for that matter. As a result, noise can affect the FDM spectrum in different ways than it does the DFT spectrum, and the effect depends on the dimensionality of the spectrum. Regularization to suppress (or control) the influence of noise to give an "ersatz", or EFDM, spectrum is shown to sometimes miss weak features, prompting a more conservative implementation of filter diagonalization. The spectra obtained, called "hybrid" or HFDM spectra, are acquired by using regularized FDM to obtain an "infinite time" spectral estimate and then adding to it the difference between the DFT of the data and the finite time FDM estimate, over the same time interval. HFDM has a number of advantages compared to the EFDM spectra, where all features must be Lorentzian. They also show better resolution than DFT spectra. The HFDM spectrum is a reliable and robust way to try to extract more information from noisy, truncated data records and is less sensitive to the choice of regularization parameter. In multidimensional NMR of liquids, HFDM is a conservative way to handle the problems of noise, truncation, and spectral peaks that depart significantly from the model of a multidimensional Lorentzian peak.
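In symbols (our notation, not the authors'), the hybrid estimate combines the regularized infinite-time FDM spectrum with the residual between the DFT of the data and the finite-time FDM estimate over the same record:

```latex
% S^{\infty}_{\mathrm{FDM}} : regularized infinite-time FDM estimate
% S^{T}_{\mathrm{FDM}}      : finite-time FDM estimate over record length T
% \mathcal{F}_{T}\{d\}      : DFT of the measured data over the same interval
S_{\mathrm{HFDM}}(\omega) \;=\; S^{\infty}_{\mathrm{FDM}}(\omega)
 \;+\; \Big[\, \mathcal{F}_{T}\{d\}(\omega) \;-\; S^{T}_{\mathrm{FDM}}(\omega) \,\Big],
```

so that any content poorly captured by the Lorentzian model (noise, truncation artifacts, non-Lorentzian peaks) re-enters through the DFT residual rather than being forced into Lorentzian form.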
Weak-lensing detection of intracluster filaments with ground-based data
NASA Astrophysics Data System (ADS)
Maturi, Matteo; Merten, Julian
2013-11-01
According to the current standard model of cosmology, matter in the Universe arranges itself along a network of filamentary structure. These filaments connect the main nodes of this so-called "cosmic web", which are clusters of galaxies. Although its large-scale distribution is clearly characterized by numerical simulations, constraining the dark-matter content of the cosmic web in reality turns out to be difficult. The natural method of choice is gravitational lensing. However, the direct detection and mapping of the elusive filament signal is challenging, and in this work we present two methods that are specifically tailored to achieve this task. A linear matched filter aims at detecting the smooth mass component of filaments and is optimized to perform a shear decomposition that follows the anisotropic component of the lensing signal, a property filaments clearly inherit due to their morphology. At the same time, the contamination arising from the central massive cluster is controlled in a natural way. The 1σ filament detection limit is about κ ~ 0.005-0.01, depending on the filter's template width and length, enabling the detection of structures beyond the reach of other approaches. The second, complementary method seeks to detect the clumpy component of filaments. Here the detection is determined by the number density of subclump identifications in an area enclosing the potential filament, as identified within the observed field by the filter approach. We tested both methods against mock observations based on realistic N-body simulations of filamentary structure and proved the feasibility of detecting filaments with ground-based data.
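A toy one-dimensional version of a linear matched filter conveys the idea; the real method operates on the two-component shear field with an anisotropy-following template, which is well beyond this sketch. The template shape, noise level, and the κ ≈ 0.01 amplitude are illustrative.

```python
# Toy 1D matched filter: correlate noisy data with a unit-norm template
# of the expected filament cross-section to boost the detection SNR.
import numpy as np

def matched_filter(data, template):
    t = template / np.sqrt(np.sum(template**2))   # unit-norm template
    return np.correlate(data, t, mode="same")     # filter response

x = np.linspace(-5, 5, 501)
template = np.exp(-x**2 / 0.5)                    # smooth filament cross-section
signal = 0.01 * np.exp(-(x - 1.0)**2 / 0.5)       # kappa ~ 0.01 feature at x = 1
data = signal + 0.005 * np.random.randn(x.size)   # buried in noise
resp = matched_filter(data, template)
print("peak response at x =", x[np.argmax(resp)])
```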
Design of a 32-Channel EEG System for Brain Control Interface Applications
Wang, Ching-Sung
2012-01-01
This study integrates hardware circuit design and supporting software development to achieve a 32-channel EEG system for BCI applications. Since human EEG signals are generally very weak, the design must prevent noise interference while also avoiding waveform distortion and waveform offset; a preamplifier with a high common-mode rejection ratio and a high signal-to-noise ratio is therefore essential. Moreover, friction between the electrode pads and the skin, as well as the dual power supply design, generates a DC bias that affects the measured signals. For this reason, this study designs an improved single-power AC-coupled circuit, which effectively reduces the DC bias and the error caused by component tolerances. At the same time, digital techniques are applied to implement adjustable amplification and filtering, which can be configured for different EEG frequency bands. The analog filtering circuit first extracts a frequency band, and the digital filter design then narrows the extracted band to the target band, combined with a MATLAB-based human-machine interface for displaying brain waves. Finally, the measured signals are compared to those of a traditional 32-channel EEG system. In addition to meeting the IFCN standards, the system design was verified by measurements in a standard EEG isolation room in order to demonstrate its accuracy and reliability. PMID:22778545
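As an illustrative stand-in for the adjustable digital band filtering described (not the paper's actual implementation), a classical EEG band can be selected from a sampled channel with SciPy; the band edges, filter order, and 500 Hz sampling rate are assumptions.

```python
# Select a classical EEG band from a sampled channel with a zero-phase
# Butterworth bandpass. Band edges, order, and fs are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

EEG_BANDS = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}

def extract_band(x, band, fs=500.0, order=4):
    lo, hi = EEG_BANDS[band]
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)   # zero-phase to avoid waveform distortion

t = np.arange(0, 2, 1 / 500.0)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
alpha = extract_band(raw, "alpha")    # keeps the 10 Hz component
print(alpha.shape)
```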
Arciuli, Joanne
2017-05-01
This study reports on a new task for assessing children's sensitivity to lexical stress for words with different stress patterns and demonstrates that this task is useful in examining predictors of reading accuracy during the elementary years. In English, polysyllabic words beginning with a strong syllable exhibit the most common, or dominant, pattern of lexical stress (e.g., "coconut"), whereas polysyllabic words beginning with a weak syllable exhibit a less common, non-dominant pattern (e.g., "banana"). The new Aliens Talking Underwater task assesses children's ability to match low-pass filtered recordings of words to pictures of objects. Via filtering, phonetic detail is removed but prosodic contour information relating to lexical stress is retained. In a series of two-alternative forced choice trials, participants see a picture and are asked to choose which of two filtered recordings matches the name of that picture; one recording exhibits the correct lexical stress of the target word, and the other reverses the pattern of stress over the initial two syllables of the target word, rendering it incorrect. Target words exhibit either dominant or non-dominant stress. Analysis of data collected from 192 typically developing children aged 5 to 12 years revealed that sensitivity to non-dominant lexical stress was a significant predictor of reading accuracy even when age and phonological awareness were taken into account. A total of 76.3% of variance in children's reading accuracy was explained by these variables. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
FIRST RESULTS FROM Z-FOURGE: DISCOVERY OF A CANDIDATE CLUSTER AT z = 2.2 IN COSMOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spitler, Lee R.; Glazebrook, Karl; Poole, Gregory B.
2012-04-01
We report the first results from the Z-FOURGE survey: the discovery of a candidate galaxy cluster at z = 2.2 consisting of two compact overdensities with red galaxies detected at ≳20σ above the mean surface density. The discovery was made possible by a new deep (Ks ≲ 24.8 AB, 5σ) Magellan/FOURSTAR near-IR imaging survey with five custom medium-bandwidth filters. The filters pinpoint the location of the Balmer/4000 Å break in evolved stellar populations at 1.5 < z < 3.5, yielding significantly more accurate photometric redshifts than possible with broadband imaging alone. The overdensities are within 1' of each other in the COSMOS field and appear to be embedded in a larger structure that contains at least one additional overdensity (~10σ). Considering the global properties of the overdensities, the z = 2.2 system appears to be the most distant example of a galaxy cluster with a population of red galaxies. A comparison to a large ΛCDM simulation suggests that the system may consist of merging subclusters, with properties in between those of z > 2 protoclusters with more diffuse distributions of blue galaxies and the lower-redshift galaxy clusters with prominent red sequences. The structure is completely absent in public optical catalogs in COSMOS and only weakly visible in a shallower near-IR survey. The discovery showcases the potential of deep near-IR surveys with medium-band filters to advance the understanding of environment and galaxy evolution at z > 1.5.
The pivotal role of perceived scientific consensus in acceptance of science
NASA Astrophysics Data System (ADS)
Lewandowsky, Stephan; Gignac, Gilles E.; Vaughan, Samuel
2013-04-01
Although most experts agree that CO2 emissions are causing anthropogenic global warming (AGW), public concern has been declining. One reason for this decline is the `manufacture of doubt' by political and vested interests, which often challenge the existence of the scientific consensus. The role of perceived consensus in shaping public opinion is therefore of considerable interest: in particular, it is unknown whether consensus determines people's beliefs causally. It is also unclear whether perception of consensus can override people's `worldviews', which are known to foster rejection of AGW. Study 1 shows that acceptance of several scientific propositions--from HIV/AIDS to AGW--is captured by a common factor that is correlated with another factor that captures perceived scientific consensus. Study 2 reveals a causal role of perceived consensus by showing that acceptance of AGW increases when consensus is highlighted. Consensus information also neutralizes the effect of worldview.
Fisher, Jacob C.
2017-01-01
Virtually all social diffusion work relies on a common formal basis, which predicts that consensus will develop among a connected population as the result of diffusion. In spite of the popularity of social diffusion models that predict consensus, few empirical studies examine consensus, or a clustering of attitudes, directly. Those that do either focus on the coordinating role of strict hierarchies, or on the results of online experiments, and do not consider how consensus occurs among groups in situ. This study uses longitudinal data on adolescent social networks to show how meso-level social structures, such as informal peer groups, moderate the process of consensus formation. Using a novel method for controlling for selection into a group, I find that centralized peer groups, meaning groups with clear leaders, have very low levels of consensus, while cohesive peer groups, meaning groups where more ties hold the members of the group together, have very high levels of consensus. This finding is robust to two different measures of cohesion and consensus. This suggests that consensus occurs either through central leaders’ enforcement or through diffusion of attitudes, but that central leaders have limited ability to enforce when people can leave the group easily. PMID:29335675
Scene text detection via extremal region based double threshold convolutional network classification
Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan
2017-01-01
In this paper, we present a robust text detection approach for natural images based on a region proposal mechanism. A powerful low-level detector named saliency-enhanced MSER, extended from the widely used MSER by incorporating saliency detection methods, ensures a high recall rate. Given a natural image, character candidates are extracted from three channels of a perception-based illumination-invariant color space by the saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information, including pixel-level and character-level information, as the character candidate classifier. Each image patch is classified as strong text, weak text or non-text by double threshold filtering instead of conventional one-step classification, leveraging confidence scores obtained via the CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from the weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on the public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891
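The double-threshold step plus the recursive neighborhood search can be sketched as a hysteresis-style region growing over candidate scores; the scores, thresholds, and distance rule below are invented, and the real system uses richer spatial heuristics.

```python
# Sketch of double-threshold filtering with a recursive neighborhood
# search: strong candidates seed the result, and weak candidates are
# kept only if connected (spatially close) to a kept candidate.
import numpy as np

def double_threshold_track(boxes, scores, t_low=0.3, t_high=0.7, radius=50.0):
    strong = [i for i, s in enumerate(scores) if s >= t_high]
    weak = {i for i, s in enumerate(scores) if t_low <= s < t_high}
    kept, frontier = set(strong), list(strong)
    while frontier:                                # recursive region growing
        i = frontier.pop()
        near = [j for j in weak
                if np.hypot(*(np.subtract(boxes[i], boxes[j]))) < radius]
        for j in near:
            weak.discard(j)
            kept.add(j)
            frontier.append(j)
    return sorted(kept)

boxes = [(10, 10), (40, 12), (300, 200), (70, 15)]   # candidate centers
scores = [0.9, 0.5, 0.4, 0.35]
print(double_threshold_track(boxes, scores))         # -> [0, 1, 3]
```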
Effect of weak measurement on entanglement distribution over noisy channels.
Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C H
2016-03-03
Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made on the development of it. Recently, it has been found that the null-result weak measurement (NRWM) can be used to enhance probabilistically the entanglement of a single copy of amplitude-damped entangled state. This paper investigates remote distributions of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of EDPs with a certain probability. However, we find that the NRWM would contribute to remote distributions of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation in protecting entanglement from decoherence.
The historical development of the magnetic method in exploration
Nabighian, M.N.; Grauch, V.J.S.; Hansen, R.O.; LaFehr, T.R.; Li, Y.; Peirce, J.W.; Phillips, J.D.; Ruder, M.E.
2005-01-01
The magnetic method, perhaps the oldest of geophysical exploration techniques, blossomed after the advent of airborne surveys in World War II. With improvements in instrumentation, navigation, and platform compensation, it is now possible to map the entire crustal section at a variety of scales, from strongly magnetic basement at regional scale to weakly magnetic sedimentary contacts at local scale. Methods of data filtering, display, and interpretation have also advanced, especially with the availability of low-cost, high-performance personal computers and color raster graphics. The magnetic method is the primary exploration tool in the search for minerals. In other arenas, the magnetic method has evolved from its sole use for mapping basement structure to include a wide range of new applications, such as locating intrasedimentary faults, defining subtle lithologic contacts, mapping salt domes in weakly magnetic sediments, and better defining targets through 3D inversion. These new applications have increased the method's utility in all realms of exploration - in the search for minerals, oil and gas, geothermal resources, and groundwater, and for a variety of other purposes such as natural hazards assessment, mapping impact structures, and engineering and environmental studies. © 2005 Society of Exploration Geophysicists. All rights reserved.
Shengelia, Lela; Pavlova, Milena; Groot, Wim
2017-08-08
The improvement of maternal health has been one of the aims of the health financing reforms in Georgia. Public-private relationships are the most notable part of the reform. This study aimed to assess the strengths and weaknesses of maternal care financing in Georgia in terms of adequacy and effects. A qualitative design was used to explore the opinions of key stakeholders about the adequacy of maternal care financing and financial protection of pregnant women in Georgia. Women who had used maternal care during the past 4 years, along with health care providers, policy makers, and representatives of international partner organizations and the national professional body, were the respondents in this study. Six focus group discussions were conducted to collect data from women, and 15 face-to-face in-depth interviews to collect data from the other stakeholders. Each focus group discussion consisted of 7-8 women. Two focus group discussions were carried out at each of the target settings (i.e. Tbilisi, Imereti and Adjara). Women were selected in each location through the hospital registry and snowballing method. The evidence shows that there is a consensus among maternal care stakeholder groups on the influence of the healthcare financing reforms on maternal health. Specifically, the privatization of maternal care services has had positive effects because it significantly improved the environment and technical capacity of the maternity houses. Also, in contrast to other former Soviet republics, there are no longer informal payments for maternal care in Georgia. However, the privatization, which was done without strict regulation, negatively influenced the reform process and allowed private providers to manipulate the formal user fees in maternal care. Stakeholders also indicated that the UHC programs implemented at the last stage of the healthcare financing reform, as well as other state maternal health programs, protect women from catastrophic health care expenditure. The results suggest a consensus among stakeholders on the influence of the healthcare financing reform on maternal healthcare. The total privatization of maternal care services has had positive effects because it significantly improved the environment and the technical capacity of maternity houses. However, the aim to improve maternal health and to reduce maternal mortality was not fully achieved. Financial protection of mothers should be further studied to identify vulnerable groups who should be targeted in future programs.
Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.
Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan
2017-08-13
Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources to reduce voltage unbalance. Consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of the individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter used in the inverter of the single-phase source enables control of reactive power to minimize voltage unbalance locally. The proposed scheme is validated by simulating an LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
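The abstract does not spell out the agents' estimator; as a hedged illustration of the underlying idea only, here is a minimal static average-consensus iteration on a communication graph (the function name, step size, and line-graph example are assumptions for illustration; the paper's scheme uses a dynamic estimator of time-varying voltage and current magnitudes):

```python
import numpy as np

def average_consensus(x0, adjacency, eps=0.2, iters=200):
    # Discrete-time consensus update x <- x - eps * L x, with L the graph
    # Laplacian; eps must be below 1/deg_max for the iteration to be stable.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - eps * (L @ x)
    return x  # converges to mean(x0) on any connected graph

# Four agents on a line graph, each holding a local phase-voltage reading;
# every agent converges to the network-wide average (~229.7 V here).
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(average_consensus([230.0, 228.5, 231.2, 229.1], A))
```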
Effects of solar radiation on hair and photoprotection.
Dario, Michelli F; Baby, André R; Velasco, Maria Valéria R
2015-12-01
In this paper the negative effects of solar radiation (ultraviolet, visible and infrared wavelengths) on hair properties such as color, mechanical properties, luster, protein content and surface roughness, among others, will be discussed. Despite knowing that radiation damages hair, there is no consensus about the particular effect of each segment of solar radiation on the hair shaft. Hair photoprotection products are primarily targeted at dyed hair, especially auburn pigments, and gray shades. They are usually based on silicones, antioxidants and quaternary chemical UV filters, which have more affinity for the negatively charged hair surface and present higher efficacy. Unfortunately, unlike for skin photoprotection, there are no regulated parameters for the efficacy evaluation of hair care products, which makes it impossible to compare the results published in the literature. Thus, it is important that researchers make an effort to apply experimental conditions similar to real levels of sun exposure, such as dose, irradiance, time, temperature and relative humidity. Copyright © 2015 Elsevier B.V. All rights reserved.
Secondhand Smoke in the Operating Room? Precautionary Practices Lacking for Surgical Smoke
Steege, Andrea L.; Boiano, James M.; Sweeney, Marie H.
2016-01-01
Background Consensus organizations, government bodies, and healthcare organization guidelines recommend that surgical smoke be evacuated at the source by local exhaust ventilation (LEV) (i.e., smoke evacuators or wall suctions with inline filters). Methods Data are from NIOSH's Health and Safety Practices Survey of Healthcare Workers module on precautionary practices for surgical smoke. Results Four thousand five hundred thirty-three survey respondents reported exposure to surgical smoke: 4,500 during electrosurgery and 1,392 during laser surgery procedures. Respondents were mainly nurses (56%) and anesthesiologists (21%). Only 14% of those exposed during electrosurgery reported that LEV was always used during these procedures, while 47% reported such use during laser surgery. Those reporting that LEV was always used were also more likely to report training and employer standard procedures addressing the hazards of surgical smoke. Few respondents reported use of respiratory protection. Conclusions Study findings can be used to raise awareness of the marginal use of exposure controls and the impediments to their use. PMID:27282626
Constructive conflict and staff consensus in substance abuse treatment.
Melnick, Gerald; Wexler, Harry K; Chaple, Michael; Cleland, Charles M
2009-03-01
Previous studies demonstrated relationships between consensus among both staff and clients and client engagement in treatment, and between client consensus and 1-year treatment outcomes. The present article explores the correlates of staff consensus, defined as the level of agreement among staff as to the importance of treatment activities in their program, using a national sample of 80 residential substance abuse treatment programs. Constructive conflict resolution had the largest effect on consensus. Low client-to-staff ratios, staff education, and staff experience in substance abuse treatment were also significantly related to consensus. Frequency of training, an expected correlate of consensus, was negatively associated with consensus, whereas frequency of supervision was not a significant correlate. The implications of the findings for future research and program improvement are discussed.
NASA Astrophysics Data System (ADS)
Cook, J.; Jacobs, P.; Nuccitelli, D.
2014-12-01
Laypeople use expert opinion as a mental shortcut to form views on complex scientific issues. This heuristic is particularly relevant in the case of climate change, where perception of consensus is one of the main predictors of public support for climate action. A low public perception of consensus (around 60% compared to the actual 97% consensus) is a significant stumbling block to meaningful climate action, underscoring the importance of closing the "consensus gap". However, some scientists question the efficacy or appropriateness of emphasizing consensus in climate communication. I'll summarize the social science research examining the importance and effectiveness of consensus messaging. I'll also present several case studies of consensus messaging employed by the team of communicators at the Skeptical Science website.
Kim, Sung Sun; Kook, Myeong-Cherl; Shin, Ok-Ran; Kim, Hee Sung; Bae, Han-Ik; Seo, An Na; Park, Do Youn; Choi, Il Ju; Kim, Young-Il; Nam, Byung Ho; Kim, Sohee
2018-04-01
Intestinal metaplasia and atrophy of the gastric mucosa are associated with Helicobacter pylori infection and are considered premalignant lesions. The updated Sydney system is used to grade these parameters, but experienced pathologists and consensus processes are required for interobserver agreement. We sought to determine the influence of the consensus process on the assessment of intestinal metaplasia and atrophy. Two study sets were used: a consensus set and a validation set. The consensus set was circulated, and five gastrointestinal pathologists evaluated the cases independently using the updated Sydney system. Consensus definitions were then agreed at the first consensus meeting. The same set was recirculated to determine the effect of the consensus. A second consensus meeting was held to standardise the grading criteria, and the validation set was circulated to determine the influence. Two additional circulations were performed to assess the maintenance of consensus and intraobserver variability. Interobserver agreement on intestinal metaplasia and atrophy was improved through the consensus process (intestinal metaplasia: baseline κ = 0.52 versus final κ = 0.68, P = 0.006; atrophy: baseline κ = 0.19 versus final κ = 0.43, P < 0.001). Higher interobserver agreement on atrophy was observed after consensus regarding the definition (pre-consensus: κ = 0.19 versus post-consensus: κ = 0.34, P = 0.001). There was improved interobserver agreement on intestinal metaplasia after standardisation of the grading criteria (pre-standardisation: κ = 0.56 versus post-standardisation: κ = 0.71, P = 0.010). This study suggests that interobserver variability regarding intestinal metaplasia and atrophy may result from the lack of a precise definition and fine criteria, and can be reduced by consensus on definitions and standardisation of grading criteria. © 2017 John Wiley & Sons Ltd.
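For readers unfamiliar with the κ statistics quoted above, here is a minimal sketch of Cohen's kappa for two raters (the study pools five raters, for which a multi-rater generalization such as Fleiss' kappa applies in practice; the function name and example labels are illustrative):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected
    # for the agreement expected by chance from each rater's marginals.
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_o = np.mean(a == b)
    cats = np.union1d(a, b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (p_o - p_e) / (1.0 - p_e)

# Two pathologists grading 10 biopsies as none/mild/marked (0/1/2):
print(cohens_kappa([0, 1, 1, 2, 0, 2, 1, 0, 2, 1],
                   [0, 1, 2, 2, 0, 2, 1, 1, 2, 1]))  # ~0.70
```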
Kozutsumi, Daisuke; Tsunematsu, Masako; Yamaji, Taketo; Kino, Kohsuke
2007-01-01
Cry-consensus peptide is a linearly linked peptide of T-cell epitopes for the management of Japanese cedar (JC) pollinosis and is expected to become a new drug for immunotherapy. However, the mechanism of T-cell epitopes in allergic diseases is not well understood, and thus a simple in vitro procedure for evaluation of its biological activity is desired. Peripheral blood mononuclear cells (PBMC) were isolated from 27 JC pollinosis patients and 10 healthy subjects, and cultured in vitro for 4 days in the presence of Cry-consensus peptide and ³H-thymidine. The relationship between growth stimulation (stimulation index; SI) and antigen-specific IgE levels in serum was also investigated in JC pollinosis patients. Moreover, to confirm the importance of the primary sequence of Cry-consensus peptide, heat-treated Cry-consensus peptide and a mixture of its constituent amino acids were prepared, and their ³H-thymidine uptake was compared with that of the original Cry-consensus peptide. Finally, whether Cry-consensus peptide stimulates PBMCs from healthy subjects was investigated. The mean SI of JC patients showed a good correlation with Cry-consensus peptide concentration in the culture medium; however, the SI was independent of the anti-Cry j 1 IgE level. Heat-denatured Cry-consensus peptide retained a PBMC proliferation stimulatory effect comparable to the original Cry-consensus peptide, while the mixture of amino acids constituting Cry-consensus peptide did not stimulate PBMC proliferation. PBMCs from healthy subjects did not respond to Cry-consensus peptide at all. These data indicate that the PBMC response of patients suffering from JC pollinosis to Cry-consensus peptide is specific for the sequence of its T-cell epitopes and may be useful for the evaluation of the efficacy of Cry-consensus peptide in vivo.
Consensus on consensus: a synthesis of consensus estimates on human-caused global warming
NASA Astrophysics Data System (ADS)
Cook, John; Oreskes, Naomi; Doran, Peter T.; Anderegg, William R. L.; Verheggen, Bart; Maibach, Ed W.; Carlton, J. Stuart; Lewandowsky, Stephan; Skuce, Andrew G.; Green, Sarah A.; Nuccitelli, Dana; Jacobs, Peter; Richardson, Mark; Winkler, Bärbel; Painting, Rob; Rice, Ken
2016-04-01
The consensus that humans are causing recent global warming is shared by 90%-100% of publishing climate scientists according to six independent studies by co-authors of this paper. Those results are consistent with the 97% consensus reported by Cook et al (Environ. Res. Lett. 8 024024) based on 11 944 abstracts of research papers, of which 4014 took a position on the cause of recent global warming. A survey of authors of those papers (N = 2412 papers) also supported a 97% consensus. Tol (2016 Environ. Res. Lett. 11 048001) comes to a different conclusion using results from surveys of non-experts such as economic geologists and a self-selected group of those who reject the consensus. We demonstrate that this outcome is not unexpected because the level of consensus correlates with expertise in climate science. At one point, Tol also reduces the apparent consensus by assuming that abstracts that do not explicitly state the cause of global warming (‘no position’) represent non-endorsement, an approach that if applied elsewhere would reject consensus on well-established theories such as plate tectonics. We examine the available studies and conclude that the finding of 97% consensus in published climate research is robust and consistent with other surveys of climate scientists and peer-reviewed studies.
The East Anglian specialist registrar assessment tool
Robinson, Susan; Boursicot, Katharine; Hayhurst, Catherine
2007-01-01
Background In our region, it was acknowledged that the process of assessment needed to be improved, but before developing a system for this, there was a need to define the "competent or satisfactory trainee". Objective To outline the process by which a consensus was achieved on this standard, and how a system for formally assessing competency across a wide range of knowledge, skills and attitudes was subsequently agreed on, thus enabling increased opportunities for training and feedback and improving the accuracy of assessment in the region. Methods The opinions of trainees and trainers from across the region were collated, and a consensus was achieved with regard to the minimum acceptable standard for a trainee in emergency medicine, thus defining a competent trainee. The group that set the standard then focused on identifying the assessment methods most appropriate for the evaluation of the knowledge, skills and attitudes required of an emergency medicine trainee. The tool was subsequently trialled for a period of 6 months, and opinion was evaluated by use of a questionnaire. Results The use of the tool was reviewed from both the trainers' and trainees' perspectives. 42% (n = 11) of trainers and 31% (n = 8) of trainees responded to the questionnaire. In the region, there were 26 trainers and 26 trainees. Five trainees and nine trainers had used the tool. 93% (14/15) of respondents thought that the descriptors used to describe the satisfactory trainee were acceptable; 89% (8/9) of trainers thought that it helped them assess trainees more accurately. 60% (3/5) of trainees thought that, as a result, they had a better understanding of their weak areas. Conclusion We believe that we achieved a consensus across our region as to what defined a satisfactory trainee and set the standard against which all our trainees would subsequently be evaluated. The use of this tool to assess trainees during the pilot period was disappointing; however, we were encouraged that most of those using the tool thought that it allowed an objective assessment of trainees and feedback on areas requiring further work. Those who used the tool identified important reasons that may have hindered widespread use of the assessment tool. PMID:17351222
Guidelines for patient selection and performance of carotid artery stenting.
Bladin, Christopher; Chambers, Brian; New, Gishel; Denton, Michael; Lawrence-Brown, Michael
2010-06-01
The endovascular treatment of carotid atherosclerosis with carotid artery stenting (CAS) remains controversial. Carotid endarterectomy (CEA) remains the benchmark in terms of procedural mortality and morbidity. At present, there are no consensus Australasian guidelines for the safe performance of CAS. We applied a modified Delphi consensus method of iterative consultation between the College representatives on the Carotid Stenting Guidelines Committee (CSGC). Selection of patients suitable for CAS needs careful consideration of clinical and patho-anatomical criteria and cannot be directly extrapolated from clinical indicators for CEA. Randomized controlled trials (including pooled analyses of results) comparing CAS with CEA for the treatment of symptomatic stenosis have demonstrated that CAS is more hazardous than CEA. On current evidence, the CSGC therefore recommends that CAS should not be performed in the majority of patients requiring carotid revascularisation. The evidence for CAS in patients with symptomatic severe carotid stenosis who are considered medically high risk is weak, and there is currently no evidence to support CAS as a treatment for asymptomatic carotid stenosis. The use of distal protection devices during CAS remains controversial, with an increased risk of clinically silent stroke. The knowledge requirements for the safe performance of CAS include an understanding of the evidence base from randomized controlled trials, carotid and aortic arch anatomy and pathology, clinical stroke syndromes, the differing treatment options for stroke and carotid atherosclerosis, and the recognition and management of periprocedural complications. It is critical that all patients being considered for a carotid intervention have adequate pre-procedural neuro-imaging and an independent, standardized neurological assessment before and after the procedure. Maintenance of proficiency in CAS requires active involvement in surgical/endovascular audit and continuing medical education programs. These standards should apply in both the public and private health care settings. These guidelines represent the consensus of an inter-collegiate committee intended to direct appropriate patient selection and the range of cognitive and technical requirements to perform CAS. Advances in endovascular technologies and the results of randomized controlled trials will guide future revisions of these guidelines.
An interactive method based on the live wire for segmentation of the breast in mammography images.
Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu
2014-01-01
In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the fuzzy c-means (FCM) clustering algorithm are incorporated into the definition of the Live Wire cost function. FCM analysis of the image is used for edge enhancement, eliminating the interference of weak edges, and the improved Live Wire is applied to two sets of breast segmentation data to obtain clear segmentation of breast lumps. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more reliable objective basis for quantitative and qualitative analysis of breast lumps.
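The abstract does not give the modified cost function; as a hedged sketch of how gradient strength and an FCM boundary-membership map might be combined into a Live Wire local cost (the function name, weights, and the use of a Sobel gradient in place of the paper's Gabor responses are assumptions for illustration):

```python
import numpy as np
from scipy import ndimage

def livewire_local_cost(image, fcm_edge_membership, w_grad=0.6, w_fcm=0.4):
    # Live Wire assigns low cost to pixels likely to lie on a boundary:
    # strong gradient magnitude and high FCM membership of the boundary
    # cluster both reduce the cost. Weights are illustrative only.
    img = image.astype(float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    grad = grad / (grad.max() + 1e-12)          # normalize to [0, 1]
    return w_grad * (1.0 - grad) + w_fcm * (1.0 - fcm_edge_membership)
```

A shortest-path search (e.g., Dijkstra) over this cost map between user-selected seed points then yields the interactive boundary, which is the standard Live Wire mechanism.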
Wavepacket dynamics in one-dimensional system with long-range correlated disorder
NASA Astrophysics Data System (ADS)
Yamada, Hiroaki S.
2018-03-01
We numerically investigate dynamical properties of the one-dimensional tight-binding model with long-range correlated disorder having power spectrum 1/f^α (α: spectrum exponent), generated by the Fourier filtering method. For relatively small α < αc (= 2), the time dependence of the mean square displacement (MSD) of an initially localized wavepacket shows ballistic spread, and the wavepacket localizes as time elapses. It is shown that the α-dependence of the dynamical localization length determined by the MSD exhibits a simple scaling law in the localization regime for relatively weak disorder strength W. Furthermore, the MSD scaled by the dynamical localization length almost obeys a universal function from the ballistic to the localization regime for various combinations of the parameters α and W.
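The Fourier filtering method named above admits a compact implementation; a minimal sketch (the normalization to zero mean and unit variance, and the function name, are our choices, not necessarily the paper's):

```python
import numpy as np

def correlated_disorder(n, alpha, W, seed=0):
    # Fourier filtering method: impose amplitudes |c_k| ~ k**(-alpha/2)
    # (so the power spectrum goes as 1/k**alpha) on random phases, then
    # transform back and rescale to disorder strength W.
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)[1:]                       # k > 0 modes only
    c = k ** (-alpha / 2.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
    eps = np.fft.irfft(np.concatenate(([0.0], c)), n)  # zero-mean field
    eps = (eps - eps.mean()) / eps.std()             # unit variance
    return W * eps                                    # on-site energies

# Example: on-site energies for a 1024-site chain with alpha = 1.5, W = 1.0
print(correlated_disorder(1024, 1.5, 1.0)[:5])
```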
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
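As context for the Capon algorithm named above, here is a textbook sketch of the Capon (minimum-variance distortionless response) spatial spectrum for a sensor array (the diagonal-loading level and function interface are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def capon_spectrum(snapshots, steering, loading=1e-3):
    # snapshots: (n_sensors, n_snapshots) complex array data;
    # steering:  (n_sensors, n_angles) steering vectors for scanned angles.
    n, m = snapshots.shape
    R = snapshots @ snapshots.conj().T / m                 # sample covariance
    R = R + loading * (np.trace(R).real / n) * np.eye(n)   # diagonal loading
    Rinv = np.linalg.inv(R)
    # Capon power: P(theta) = 1 / (a(theta)^H R^{-1} a(theta))
    quad = np.einsum('ni,nm,mi->i', steering.conj(), Rinv, steering).real
    return 1.0 / quad
```

Peaks of the returned spectrum indicate source bearings; MUSIC differs in that it scans the noise subspace of R rather than its inverse.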
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case.
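The matched filter technique the abstract builds on has a standard frequency-domain form; a minimal sketch under stationary Gaussian noise (the one-sided-PSD convention and the normalization are our illustrative choices, and edge terms are neglected for brevity):

```python
import numpy as np

def matched_filter_snr(data, template, psd, dt):
    # Frequency-domain matched filter under stationary Gaussian noise.
    # data, template: real time series of equal length n; psd: one-sided
    # noise PSD sampled at np.fft.rfftfreq(n, dt). DC/Nyquist terms are
    # neglected for brevity.
    n = len(data)
    df = 1.0 / (n * dt)
    d = np.fft.rfft(data) * dt            # continuous-transform convention
    h = np.fft.rfft(template) * dt
    integrand = d * h.conj() / psd
    # z(t) = 4 Re int_0^inf d(f) h*(f) / S_n(f) e^{2 pi i f t} df;
    # irfft reconstructs the real signal, supplying the "2 Re" part.
    z = 2.0 * n * df * np.fft.irfft(integrand, n)
    sigma = np.sqrt(4.0 * df * np.sum(np.abs(h) ** 2 / psd))
    return z / sigma                      # SNR at every time lag
```

With data equal to the template and no noise, the peak of the returned series equals sigma, the optimal SNR, which is a quick self-consistency check.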
NASA Astrophysics Data System (ADS)
Nielsen, N. C.; Bildsøe, H.; Jakobsen, H. J.; Levitt, M. H.
1994-08-01
We describe an efficient method for the recovery of homonuclear dipole-dipole interactions in magic-angle spinning NMR. Double-quantum homonuclear rotary resonance (2Q-HORROR) is established by fulfilling the condition ωr = 2ω1, where ωr is the sample rotation frequency and ω1 is the nutation frequency around an applied resonant radio frequency (rf) field. This resonance can be used for double-quantum filtering and measurement of homonuclear dipolar interactions in the presence of magic-angle spinning. The spin dynamics depend only weakly on crystallite orientation, allowing good performance for powder samples. Chemical shift effects are suppressed to zeroth order. The method is demonstrated for singly and doubly 13C-labeled L-alanine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salje, E. K. H.; Dul'kin, E.; Roth, M.
2015-04-13
Acoustic emission (AE) spectroscopy without frequency filtering (broadband AE) and moderate time integration is shown to be sensitive enough to allow the investigation of subtle nano-structural changes in ferroelectric BaTiO₃ and ferroelastic Pb₃(PO₄)₂. AE signals during weak phase transitions are compatible with avalanche statistics as observed previously in large-strain systems. While the data are too sparse to determine avalanche exponents, they are well suited to determine other thermodynamic parameters such as transition temperatures and critical stresses.
Coffee, tea, and cocoa and risk of stroke.
Larsson, Susanna C
2014-01-01
Current evidence from experimental studies in animals and humans, along with findings from prospective studies, indicates beneficial effects of green and black tea as well as chocolate on cardiovascular health, and that tea and chocolate consumption may reduce the risk of stroke. The strongest evidence exists for beneficial effects of tea and cocoa on endothelial function, total and LDL cholesterol (tea only), and insulin sensitivity (cocoa only). The majority of prospective studies have reported a weak inverse association between moderate consumption of coffee and risk of stroke. However, there are as yet no clear biological mechanisms whereby coffee might provide cardiovascular health benefits. Awaiting the results from further long-term RCTs and prospective studies, moderate consumption of filtered coffee, tea, and dark chocolate seems prudent.
Gardiner, Clare; Allen, Ruth; Moeke-Maxwell, Tess; Robinson, Jackie; Gott, Merryn
2016-12-01
The financial impact of family caregiving in a palliative care context has been identified as an issue which requires further research. However, little is known about how research should be conducted in this area. The aim of this study was to explore the opinions of family caregivers in New Zealand regarding the need to conduct research relating to the financial costs of family caregiving and to explore their perspectives on acceptable and feasible methods of data collection. A qualitative study design was adopted. Semistructured interviews were conducted with 30 family caregivers who were either currently caring for a person with palliative care needs or had done so in the past year. All participants felt that research relating to the costs of family caregiving within a palliative care context was important. There was little consensus regarding the most appropriate methods of data collection and administration. Online methods were preferred by many participants, although face-to-face methods were particularly favoured by Māori participants. Both questionnaires and cost diaries were felt to have strengths and weaknesses. Prospective longitudinal designs are likely to be most appropriate for future research, in order to capture variations in costs over time. The lack of consensus for a single preferred method makes it difficult to formulate specific recommendations regarding methods of data collection; providing participants with options for methods of completion may therefore be appropriate. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.